
Early Childhood Research Quarterly 65 (2023) 139–158


Computational thinking in early childhood education: The impact of programming a tangible robot on developing debugging knowledge

Anastasia Misirli, Vassilis Komis∗
Department of Educational Sciences and Early Childhood Education, University of Patras, Greece

Keywords: Computational thinking; Programming; Debugging; Educational robotics; Early childhood education

Abstract
The present study investigates the debugging process of 526 preschool children (aged 4–6) when programming a tangible robot. When children are involved in programming, they need to identify and correct errors, or in other words, debug a program. Debugging is a high-level thinking skill and an essential component closely related to developing Computational Thinking in early childhood education. Although the debugging process is a cognitive function in programming that has been investigated by many researchers in recent decades, most studies are conducted in primary, secondary or higher education and concern beginners or experts in programming; very few focus on preschool education. Our study set out the following objectives: 1) How do children identify the existence of an error and locate it? 2) What types of errors emerge when coding? 3) What type of error is complex for novice programmers? 4) What strategies are developed by novice programmers to debug effectively? and 5) What types of knowledge do novice programmers construct by debugging the tangible robot Bee-Bot? The study follows an iterative model of a design-based research approach and uses multiple case studies to collect qualitative and quantitative data. The programming intervention was implemented by 30 educators in their classrooms and was based on a scenario-based teaching design. The debugging process was analysed using Klahr and Carver's (1988) framework: 1) test and evaluate program, 2) identify bug, 3) represent program, 4) locate bug and 5) correct bug. The main finding is the construction and development of syntactic and semantic knowledge.

1. Introduction

1.1. Computational thinking in early childhood education

The development of Computational Thinking (CT) is included in computer science and, more broadly, is combined with educational robotics due to the common goals and objectives they share. In recent years, the term CT has been introduced into the computer science research and teaching communities by Wing, who suggested that CT skills are not only beneficial in computer science but also in many other fields, as a skill that is as fundamental as being able to read, write, and do arithmetic (Wing, 2006). From the emergence of the concept to the present day, an extensive literature has developed which studies this term and the contexts in which it can be developed. The concept of CT extends the term 'algorithmic thinking', which is closely related to programming processes (Wing, 2006). Some years later, Guzdial, in his viewpoint for the Association for Computing Machinery (ACM), proposed that CT is the new literacy of the 21st century which, by drawing methods from diverse disciplines, would improve the teaching of computing in higher education (Guzdial, 2008). Furthermore, CT is a fundamental and high-level thinking skill for an evolving modern era, since it equips children with different levels of thinking, develops deductive ability and avoids repetitive, fixed cognitive processes (Wing, 2010). The educational benefits of CT involve the generalisation of CT to various contexts outside of computer science while strengthening and improving intellectual abilities (Wing, 2010). Potentially, CT's set of skills and abilities finds application in all scientific areas due to the applicability of different problem-solving strategies and multiple levels of abstraction, depending on the context in which they are embedded (Barr & Stephenson, 2011).

In the US, many organisations working in parallel or exclusively in education have, over the last decade, published and reported on aspects of CT and its development in education. For example, the International Society for Technology in Education (ISTE) advocates that CT is an essential literacy for all students, combining four pillars — problem decomposition, pattern recognition, abstraction and algorithms. It involves expressing solutions as a series of steps to automate a process and is linked to programming and coding because CT is the highest thinking order of problem-solving.


Corresponding author.
E-mail addresses: [email protected] (A. Misirli), [email protected] (V. Komis).

https://doi.org/10.1016/j.ecresq.2023.05.014
Received 22 January 2022; Received in revised form 22 May 2023; Accepted 30 May 2023
Available online 23 June 2023
0885-2006/© 2023 The Authors. Published by Elsevier Inc. This is an open access article under the CC BY-NC-ND license
(http://creativecommons.org/licenses/by-nc-nd/4.0/)

In 2019, in partnership with the Computer Science Teachers Association (CSTA), they launched an effort to revise the Standards for CS Educators, in which CT aspects are included. Additionally, the National Association for the Education of Young Children (NAEYC) provides resources to help educators build on preschoolers' math skills (counting, pattern recognition, and sequencing to solve problems) to support computational thinking. Moreover, many research projects have been conducted by the National Science Foundation (NSF); CT is studied from different educational perspectives, providing an evidence-based understanding of the integration of computing in STEM teaching, learning and teachers' professional development.

In the UK, the National Centre for Computing Education (NCCE), in partnership with the Department for Education, focuses on improving the provision of computing education by providing teaching resources and professional development opportunities, primarily in CT and many other aspects of computing and STEM education. Additionally, Computing at School (CAS) supports the teaching of CT concepts by providing educational resources and opportunities for professional development.

A great contribution to the field has come from scholarly works that approach CT through a developmentally appropriate framework. For example, Brennan and Resnick (2012) developed a framework to help assess the development of CT that involves three key aspects: i) CT concepts, ii) CT practices, and iii) CT perspectives; within these are included syntactic, semantic, schematic and strategic programming knowledge. Many researchers support this approach to defining CT as the most appropriate framework for studying various aspects of CT in programming instruction (Lye & Koh, 2014; Kong, 2019). A year later, Grover and Pea (2013) introduced nine essential elements of CT practices that should form the basis of curricula for learning and assessing its development. A framework aimed at introducing CT concepts and oriented more to ages 6–12 was proposed by Angeli et al. (2016); it identified five skills to promote CT. In all these works, CT is approached as a set of skills that can be developed in children through programming.

At an early stage, some researchers proposed collaboration between computer science researchers and high school teachers on how the latter could integrate computing and CT into their courses (Allan et al., 2010), and others provided ways of integrating CT concepts in high school in-school and after-school contexts (Lee et al., 2011). Others suggested that CT should be embedded in the curricula and teaching practice of K-12 educators and administrators, and that great emphasis should be given to professional development courses on this subject for in-service and pre-service educators (Yadav et al., 2016). Moreover, other factors that can affect the acquisition of CT skills in the first grades of primary school have been reported, such as curricular content and implementation (Arfé et al., 2019; Relkin et al., 2021). Mukasheva and Omirzakova (2021) consider CT a holistic and multifaceted process that contributes to the exploration of the semantic relationship between learning programming and CT. Other researchers approached CT from a humanities point of view and explored how CT and arts learning can complement each other in an informal learning context, whilst young people can be engaged with culturally important stories in a meaningful way using coding (De Paula et al., 2018).

In their review, Lye and Koh (2014) reported that most coding and CT studies were conducted in higher education settings; only a quarter of several hundred studies involved early years through 12th-grade students. Lately, a working paper published by the OECD and authored by Bers et al. (2022) provides essential data by setting out the state of the field of CT in early childhood education. Computational Thinking is presented as the broadest of three circles, alongside Computer Science and Computer Programming, that 'encompasses a set of skills involved in constructing and/or decomposing the sequential steps of a task so that it can be carried out by a computer' (Bers et al., 2022, p. 9). To that end, computer programming is a fundamental aspect of computer science involved in the development of CT. According to Bers's previous work (2018b), on which the OECD working paper draws, the basic CT concepts are algorithms, modularity, control structures, representation, hardware/software systems, design process and debugging (Bers et al., 2022, p. 12).

Summarising the findings of well-known scholarly works in early childhood education from different periods, even though they approach CT from different perspectives, it is commonly documented and highlighted that the concept of CT is based on the premise of the impact of problem-solving thinking, mainly focuses on the development of high-level thinking skills, and is strongly related to computer programming within computer science. All of the above organisations and scholarly works show the significant contribution of CT to developing children's and students' high-level thinking skills, particularly abstract and problem-solving thinking. For present purposes, we examine CT development through the lens of the debugging concept when children engage with robotics, and how it relates to the foundational skill of problem-solving.

1.2. The debugging process in computer science education

In computer science, the term symptom is often used to mean an observable difference between the actual behaviour and the planned behaviour of a computer program or system. In IEEE standards this symptom is referred to as a failure that needs a correction (IEEE, 1994). The term defect is used to mean the aspect of the program's design or implementation that will result in a symptom. The term bug is a synonym for a computer program's error or a software defect (Metzger, 2004; Spinellis, 2017). In software development and computer programming, the term debugging refers to the process of finding and resolving bugs (defects or problems that prevent correct operation) within computer programs. Debugging requires the user to decentre and to separate and differentiate the two different roles and behaviours that are carried out when setting up a program (Kurland et al., 1989). The differentiation lies, on the one hand, in the role of the programmer and his or her knowledge and intentions in coding a program, and, on the other, in the role of the computer or any machine that is able to read the program (Kurland et al., 1989). This particular behaviour is recorded as a common problem of novices in programming (Pea, 1986). Both novice and expert programmers, though not necessarily at the same level, make mistakes when writing programs. Recognition of errors requires the establishment of debugging strategies, which more likely would lead to their correction. Every debugging strategy consists of three parts: a set of assumptions, a control structure and an evaluation mechanism (Metzger, 2004). A debugging strategy comprises the strategic decisions regarding the detection and the correction of a program's faults. In modern programming languages, various tools have been developed to help detect and control errors in program code. From the computer science point of view, the programmer creates a flaw in the program code (also known as a bug or fault); this flaw causes a faulty condition in the program state, and this infection causes a failure, an externally observable error. In other words, debugging is perceived as the cognitive model of conceptualizing programming (Fincher & Robins, 2019, p. 12).

The debugging process is one of the key components of CT, describing 'the mental activity in formulating a problem to admit a computational solution' as operationalised through humans and machines (Wing, 2008; Wing, 2010). In computer science education, correcting program errors is called debugging and is one of the key aspects of the educational approach to the Logo programming language. In the Logo tradition, both programming and robotics are seen as problem-solving processes and as developing the subject's cognitive structures through communication between the student and the computer or robot. For Papert, the process of debugging constitutes for the learner a routine of understanding the function of a program without its being right or wrong, but rather of understanding whether it is fixable (Papert, 1993, p. 23). In that sense, learning goes beyond the cultural notion of a 'right'/'wrong' articulation of knowledge and moves to a more constructive approach to learning where there is no fear about 'being wrong'. The student's selection and analysis of the problem, the design and formulation of a solution in a formalistic language with different types of data representations, the program's execution, the analysis of the outcome, and the correction of program errors constitute the basic framework of the Logo culture, together with the idea of constructing components to favour modularity, which in turn leads students to learn the powerful idea of 'state' (Papert, 1993, p. 60).

The idea of 'state' answers the question of how a machine is programmed yet behaves similarly: different users may program differently but with the same outcome for the machine.

Finally, from a psychological point of view, debugging is seen as a diagnostic test for 'symptoms' displayed by a programme, which helps the programmer to identify and fix them (Hoc et al., 1990, p. 55). In particular, the Logo environment was designed to cultivate debugging as a cognitive function (Solomon & Cambridge, 1986). The emphasis is much more on debugging a child's ideas than on debugging the actual code (Solomon et al., 2020). Detecting errors in a program is a significant learning and self-regulation activity based on observing differences between expected and observed results. Therefore, the debugging process, through the pedagogical lens of Logo, has a broader dimension: it is not just about composing correct programs but aims to develop the cognitive aspects of debugging as a fundamental learning and self-regulation activity based on observing differences between expected and observed results. Throughout the debugging process, learners develop behaviour patterns such as perseverance and self-regulation (Bers, 2018a).

On the other hand, in the context of teaching programming, the detection and correction of errors (mainly logical and, to a lesser extent, syntactic errors) holds a dominant position: finding and correcting errors is a necessary cognitive and teaching process approached by many researchers, but initially the focus was on older children and students, comparing or highlighting debugging strategies established either by novices or between novices and experienced programmers (Kurland et al., 1989; McCauley et al., 2008; Murphy et al., 2008; Kim et al., 2018).

Other researchers have focused more on how debugging could be taught in higher education (Whalley et al., 2021). A considerable volume of work investigated students' misconceptions in relation to the reasons bugs occur and how programming courses can be improved towards the teaching of debugging (Luxton-Reilly et al., 2018; Rich et al., 2019; Swidan et al., 2018; Albrecht & Grabowski, 2020; Emerson et al., 2020; Johnson et al., 2020). In order for educators to understand how novice programmers interpret the debugging process in Logo, they need to take into account the level of programmers' knowledge of the syntactic structure (the rules for combining elements into commands and the rules for combining commands into programs or program segments) and the semantic structure of a programming environment (operations, objects and locations) (Fay & Mayer, 1988).

1.3. Debugging in early childhood education

The programming activity and behaviour of early childhood children during the design of an algorithm and the writing of a program (code) is directly related to the process of 'debugging'. This process, in computer science and particularly in computer programming, is called 'debugging' and is a high-level thinking skill. Debugging is a fundamental concept even when early childhood students learn programming, and it will be of great value if CT starts in early childhood education (Wing, 2010). Kong shows, in a review of 24 senior primary school education studies, that testing and debugging are suggested among the core CT aspects in seven studies (Kong, 2019). While much recent scholarly work deals with the knowledge and CT of novice programmers, there has been less emphasis on how very young children learn to debug. At an early stage, Cuneo examined the debugging strategies of young children in a turtle Logo-like programming environment without previous experience in computer programming. Despite the feedback and practice children had received in the immediate conditions, they presented more difficulty in fixing incomplete programs (missing commands) than in fixing a program into an appropriate form (sequence or suitability of commands) (Cuneo, 1986). Despite the fact that early learning environments increasingly incorporate coding curricula and CS standards, there is little about programming and debugging knowledge in early childhood education (Wang et al., 2021). However, we can identify a few studies in early childhood education (ages 4–8) focused on debugging and robotics education using either tangible or visual-block programming or Logo-based computer programming (Bers et al., 2014; Fessakis et al., 2013; Flannery & Bers, 2013; Heikkila & Mannila, 2018; Newhouse et al., 2017; Pugnali et al., 2017; Strawhacker et al., 2018; Sullivan & Bers, 2013). Firstly, Fessakis et al. reported that children could be introduced to basic computer programming concepts using a Logo-based computer environment (Fessakis et al., 2013). Applying a hybrid programming environment with concrete robots and visual-block computer programming, Flannery and Bers pointed out distinctive differences in early childhood children's debugging strategies and reasoning, categorised in three cognitive development measures, with older children demonstrating more sophisticated programming expertise (Flannery & Bers, 2013). Nevertheless, when children use tangible robots in free play, no organised problem-solving behaviours could be developed that could result in a concrete understanding of programming and debugging (Newhouse et al., 2017). Furthermore, Pugnali et al. emphasised that debugging is the most challenging concept amongst other concepts of CT but can be highly mastered with tangible robotic kits such as KIBO rather than visual-block graphic programming environments such as ScratchJr (Pugnali et al., 2017). However, Strawhacker et al. (2018) reported that children ages 5–8 indicated mid-to-high levels of ScratchJr programming comprehension on assessment tasks on programming concepts and skills of symbol decoding, sequencing, debugging, and goal-orientated programming; especially for the debugging concept, younger children displayed poorer comprehension than older ones.

Gomes et al. (2018) used digital games to examine the effects of their interactional affordances for teaching computing concepts to children aged 5–7; the interesting evidence regarding debugging was that children developed the strategy of erasing the entire program and starting from scratch rather than trying to fix it. Furthermore, Relkin et al. (2021) found that the group of students who did not receive the CAL curriculum, which includes the aspect of debugging for the development of CT, scored higher than the CAL group; it is assumed that problem-solving skills were more likely learned through another school subject that increased the no-CAL group's debugging skills.

Teaching debugging using tangible robots and coding toys is suggested as an appropriate developmental context for early childhood education (Angeli & Valanides, 2020; Bers, 2018a; Flannery & Bers, 2013; Silvis et al., 2021). For young children, debugging using tangible toys involves solving problems at the level of the program and of the physical materials (Fields et al., 2016). However, most of these devices use a Logo-like programming language, which encompasses a complexity of syntactic knowledge of external modalities (buttons, tiles, blocks) that children encounter in their learning to code and debug (Silvis et al., 2021). Similar evidence is reported by Bers et al. (2014) in a tangible-graphical programming environment, where the discovery of errors may lead children either to decide to keep their original goal or to switch to an appropriate alternative; at this stage, children's ability was reported higher in activities with simpler rather than complex modalities of a robotic system (Bers et al., 2014). Furthermore, concurrent physical and programming bugs present opportunities for young children to learn about the broader computational system in which they are learning and to construct learning models of their debugging strategies and the types of bugs they encounter (Silvis et al., 2021). In line with this evidence is Nusen and Sipitakiat's work, which introduced the tangible robot Robo-Blocks that allowed children to step through each instruction and observe the resulting action, thus making programming accessible to young children (Nusen & Sipitakiat, 2011).

Additionally, some researchers have investigated debugging through the lens of teachers and their subject knowledge of programming or teaching style.

Fig. 1. The cognitive model of the debugging process according to Klahr and Carver (1988).

For example, Kim et al. (2018) were interested in understanding the types of errors and the patterns educators developed for fixing their programs, and how these might affect their teaching in block-based programming. Strawhacker et al. (2018) reported mid-to-high levels of programming achievement in students whose educators demonstrated flexibility in lesson planning, responsiveness to student needs, technological content expertise, and concern for developing students' independent thinking. To this end, Lee and Junoh (2019) provided developmentally appropriate unplugged coding activities for in-service teachers in early childhood education that presented practical examples of tasks to enhance CT abilities, especially on algorithmic design and debugging.

Sipitakiat and Nusen (2012), however, examined how debugging can be incorporated into a tangible programming system and applied Klahr's framework for assessing debugging abilities. They added new design aspects to the robots they developed and provided clues that helped children better understand, locate, and solve the problems, showing how step-by-step control allowed children to better articulate and pinpoint errors in the program. We followed similar steps to Sipitakiat and Nusen (2012) and Rich et al. (2019), but mainly what was proposed early on by Carver and Klahr (1986) in modelling the specific steps of debugging for children (Mayer, 1988, p. 33). Thus, we adopted for our study Klahr and Carver's debugging framework, which approaches debugging through five distinct and interrelated phases: 1) Test and evaluate program, 2) Identify bug, 3) Represent program, 4) Locate bug and 5) Correct bug (Fig. 1). In the first phase, the user tests the program and checks it by comparing the initial design of the program with its performance in the robotic system. If a discrepancy between design and performance is observed in the behaviour of the particular robotic system, then the user should proceed to Phases 2–5: the user should first recognise that there is an error (bug), then reproduce the program to investigate the possible location of the error, thus leading to the detection of the error, and finally correct it.
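Read this way, the five phases form a test-and-fix loop that repeats until the robot's observed behaviour matches the planned one. The following minimal sketch is our own illustration of that loop for a Bee-Bot-style command list, not part of the study's materials; the toy 'robot', the helper names and the correction step are hypothetical stand-ins.

```python
# Minimal sketch (ours, not the study's instrument) of the Klahr & Carver (1988)
# debugging cycle applied to a Bee-Bot-style command list.

def locate_bug(observed_path, intended_path):
    """Phases 2-4: identify that a discrepancy exists, re-represent the program
    step by step, and locate the first step that departs from the plan."""
    for step, (seen, planned) in enumerate(zip(observed_path, intended_path)):
        if seen != planned:
            return step                              # index of the command to correct
    if len(observed_path) != len(intended_path):     # one path is a prefix of the other
        return min(len(observed_path), len(intended_path))
    return None                                      # no bug: design and performance agree


def debug_cycle(program, intended_path, run_robot, correct):
    """Phase 1: test and evaluate; Phases 2-5 repeat until no discrepancy remains."""
    while True:
        observed = run_robot(program)                # execute and observe the behaviour
        step = locate_bug(observed, intended_path)
        if step is None:
            return program
        program = correct(program, step)             # Phase 5: correct bug, then retest


# Toy usage: the "robot" simply echoes its program, and the correction swaps in
# the intended command at the located step.
intended = ["FORWARD", "FORWARD", "TURN_RIGHT", "FORWARD"]
buggy = ["FORWARD", "TURN_LEFT", "TURN_RIGHT", "FORWARD"]
fixed = debug_cycle(buggy, intended, run_robot=lambda p: list(p),
                    correct=lambda p, i: p[:i] + [intended[i]] + p[i + 1:])
assert fixed == intended
```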
As stated by Papert (1993), 'the process of debugging is a normal part of the process of understanding a program' (p. 61). In this study, we identify how novice programmers' debugging process matches the troubleshooting framework proposed by Klahr and Carver (1988) and how we use that framework to explain debugging. Specifically, we aim to answer the following questions: 1) How do children identify the existence of an error and locate it? 2) What types of errors emerge when coding? 3) What type of error is complex for novice programmers? 4) What strategies are developed by novice programmers to debug effectively? and 5) What types of knowledge do novice programmers construct by debugging the tangible robot Bee-Bot?

Our work contributes to the field of CT and tangible programming by demonstrating how debugging can be analysed in classifications and plays an essential role in the learning process of fostering CT abilities. This study provides typologies of debugging practices focused on developing syntactic and semantic/logical knowledge and their effectiveness. Furthermore, the results demonstrate the importance of classified debugging strategies and behaviours for increasing preschool children's exposure to coding and building a more systematic understanding of the debugging process. Moreover, the study suggests teaching developmentally appropriate robotic and programming-based activities that further promote the development of computational concepts and thinking in tangible programming.

2. Material and methods

2.1. Participants

Participants in this study were 526 young children from urban and suburban public schools and local mainstream preschools, with a mean age of 5.18 years (range 4.3–6.5, SD = 0.4655). The sample comprised 256 males and 270 females, with 167 younger (4–5 years old) and 359 older (5–6 years old) children, given that in the Greek educational system both ages are blended in each class.

Thirty (30) preschool educators from mainstream public schools were trained to implement robotics and programming in their classes, following a teaching approach based on a scenario-based design with additional research instruments to collect data. They were instructed to assess children's prior and post mental representations and subject knowledge of robots and programming by interviewing them individually, following a script of open-ended questions (Appendix 1). At the time, this was highly innovative for educators' theoretical knowledge about summative assessment. According to those reports, children had no prior knowledge of robots and programming. Though these results are not included in the present paper, we can report an evolution in algorithmic thinking and programming ability. The 'Instruments and data collection' section provides more details regarding the research design and instruments used.

2.2. Recruitment

The study was part of the FP7 European project 'The Fibonacci Project - Disseminating an Inquiry-Based Science and Mathematics Education (IBSME)', which had the ambition to contribute to the dissemination of IBSME throughout the European Union. The Fibonacci project defined the dissemination process starting with 12 Reference Centres and 24 Twin Centres.

The Department of Educational Sciences and Early Childhood Education, and especially the Laboratory of Science, Mathematics and ICT Education, of the University of Patras in Greece was one of the twenty-four Twin Centres responsible for training educators on how to plan their teaching following the enquiry-based approach. To achieve that, the Twin Centre of the University of Patras established its initial Teacher-Network by organizing an open meeting where the philosophy and usefulness of the project, as well as the actual work plan, were presented to preschool teachers and other stakeholders. At that event, the educators who were willing to participate in the Teacher-Network became familiar with the enquiry-based approach in three different scientific areas: i) Science, ii) Mathematics, and iii) ICT, and were meant to be trained to integrate them into their teaching practice through a series of professional development trainings and workshops that the project offered. Moreover, they were supported by the research team during the implementation phase in their classrooms. At the same time, they, in turn, contributed to the preparatory phase for new members of the Teacher-Network by sharing their experiences and expertise with them in the meetings.

2.3. Professional development

Educators attended a three-hour training led by the researchers of this study. The training was planned in two parts. In the first part, educators were familiarised with the principles of Inquiry-Based Science Education, with an emphasis on robotics and tangible programming. Programming concepts were approached through maths (spatial reasoning, measurement and patterns/iteration) to show how they are embedded within the national curriculum. In addition, this part provided introductory information on the robotic tools, focusing mainly on the tangible robot Bee-Bot and its affordances for teaching programming in early childhood education. In the second part, educators participated in hands-on play with the robot. They were introduced to the scenario-based design in early childhood, the pedagogical principles and methodology on which it was based, and how these were formulated in activities. Teachers were given the opportunity to implement the scenario as it was planned for their classrooms, understand how to implement the research instruments and their importance for data validity, and share possible drawbacks and propose solutions together as a group. Furthermore, teachers were given hard copies of the scenario, the assessment instruments, and the kit for implementation (robots, cards of commands-pseudocode, mats, pictures and other additional resources). The training slides, the scenario and the assessment instruments were also provided in digital form. Educators were given ongoing professional development and support through in-person assistance and role-modelling from researchers in their classrooms, or through phone calls and meetings. Additionally, a log was created to keep track of which classes delivered assessments and when, when they were planned to finish the implementation, and how to organise the rotation of the kits' distribution.

3. Procedure

3.1. Instruments and data collection

The present study followed the design-based research (DBR) methodology, which aims to improve educational practices through systematic, flexible, and iterative review, analysis, design, development, and implementation, based upon collaboration amongst researchers and practitioners in real-world settings, leading to design principles or theories (The Design-Based Research Collective, 2003; Wang & Hannafin, 2004). Thus, an iterative model of a developmental DBR approach, through a three-year implementation, analysis, and redesign, was developed. It involved multiple collaborative iterations (Anderson & Shattuck, 2012). It used multiple case studies to collect qualitative and quantitative developmental data (Kelly et al., 2008) within the same methodological framework, without a distinction being made, and was considered to fall under the case study method (Yin, 2009). These case studies are approached with mixed research methods and involve three iterations, a collaborative partnership between researchers and educators, and a tactic impact on the educational process. Hence, as the DBR approach supports, this research applies mixed methods to quantitative and qualitative data and instruments (Fig. 2). A concurrent triangulation approach is applied, which 'provides quantitative statistical results followed by qualitative quotes that support or disconfirm the quantitative results' (Creswell, 2009). This study utilised the Logo-like tangible robot Bee-Bot and a scenario-based design with three scenarios (teaching sequences), following pedagogical and developmentally appropriate methodological criteria to organise the learning of programming through three different learning areas of the curriculum: i) spatial reasoning, ii) measurement, iii) memory/iteration. Different research instruments were developed and introduced to children through the distinct phases of an educational scenario, concerning the data collection and analysis techniques (Fig. 2). All these instruments were tools for educators to record and evaluate the learning process. Mainly, all the data emerged from the post-test interviews, which were structured interviews with two different instruments reporting on mental representations and subject knowledge (Appendices 1–4). The interviews were used individually to assess (prior) and evaluate (post) the transformation of children's mental representations and the consolidation of CT skills. Furthermore, the data on debugging derive from the richness and range of the programming profiles that emerged in this study, since no teaching activities or assessment of the debugging process were included in the scenarios implemented.

3.2. Scenario-based design for teaching and learning programming

At this point, we should describe what a scenario-based design is. A scenario-based design for teaching and learning alongside the tangible robot Bee-Bot is used to develop children's CT abilities (Misirli & Komis, 2014; Komis & Misirli, 2015; Komis et al., 2017). The conceptualisation of a scenario-based design is based on constructionist pedagogical approaches such as project-based learning, child-centred learning, a problem-solving learning environment, collaborative learning, and scaffolding and reflection processes. It includes seven parts: 1. Identification of the teaching subject, 2. Children's prior mental representations and subject knowledge, 3. Learning goals, 4. Teaching activities, 5. Artifacts and material, 6. Children's post mental representations and subject knowledge, and 7. Documentation. This constitutes a linear model for designing a full teaching intervention based on robotics and presents different planning and implementation instances (Misirli & Komis, 2014).

Especially for part 4 (Teaching activities), the idea was to have two sections. The first section of activities was designed to focus on robotics concepts (function, control and operation of the robot), around two basic programming strategies: i) 'step-by-step' and ii) automated. The first strategy is highly recommended to help students synchronise their thinking better with the actual movement of the robot and the program being executed (Nusen & Sipitakiat, 2011). The gridded mats we designed supported this strategy, as the length of each square corresponds to the distance Bee-Bot moves in one linear step. Hence, children, by moving the robot on the grid from one location to the other, formulated in an abstract way the program meant to be built. The second section integrated these concepts into activities but focused on mathematical and programming concepts (spatial reasoning, measurement and memory/iteration). In addition, material (gridded mats of 15 cm squares, cards/sets of commands, themed pictures) was developed and provided to educators in a kit. However, when using the Bee-Bot, the children's programme was not visible, as it is stored internally in the device. Hence, we introduced the set of commands in cards, which worked as a 'pseudocode' (Fig. 3). Initially, the 'pseudocode' is a written list of actions in a numbered sequence. However, since in the national curriculum children at that age are not systematically taught writing, this type of 'pseudocode' was a means for visually representing programs and facilitating their building of syntactic and semantic knowledge.

Fig. 2. The research design and instruments.

Fig. 3. 'Pseudocode': visual representation of a programme.

Additionally, each programme was instructed to be set out in 'pseudocode' in vertical order on the side of the mat, so that children could compare the actual programme to the robot's movement and make connections between these modalities. Moreover, the 'pseudocode' proved to be the medium for researchers to capture children's coding activities and programs and obtain accurate data.

Each scenario included 8–10 activities, lasting approximately 30–45 minutes each, and was planned to last 2–3 weeks depending on each classroom's organisation and size. One activity was scheduled to be implemented each day. Initially, children were assigned by their teachers to permanent groups throughout each scenario. In the beginning, educators administered pre-testing to assess children's prior mental representations of the robot's properties and operational features, individually per child (Appendix 1). The instrument included eleven open-ended questions and one close-ended one. The first three questions were about property features and gradually became more specific on each of the operational features of the robot (function and control). After each scenario, educators administered two post-assessment instruments: i) the same instrument that was administered for prior mental representations, used at this stage for tracing final representations (Appendix 1), and ii) an evaluation activity on subject knowledge (Appendices 2, 3 & 4).

Throughout each scenario's activities, children were asked, when programming, to follow this order of strategies: a) verbalise the algorithm ('loud thinking', Papert, 1993, p. 197), b) transfer the algorithm to a program by using 'pseudocode', c) insert the program into the robot and d) execute and test the program. According to Vygotsky (1978), thought is considered 'inner speech' and is the result of language. Children, by verbalising an algorithm, or in other words thinking aloud, reinforced their 'inner speech' towards shaping the desired solution to a given problem, and this consequently led them to organise a strategy on how to proceed. As observed, most children took a step further by simultaneously moving the robot (off mode) and modelling the solution of a problem, indicating it with their finger around the mat as they verbalised the algorithm (Misirli & Komis, 2014). Coding a program by using 'pseudocode' meant that children had to design and follow the particular sequential programming structure for the robot Bee-Bot (Fig. 4). The 'pseudocode' was the medium to support the visualisation and reflection of this process (Brusilovsky, 1993). As Wilson and Moffat (2010) suggest, the 'pseudocode' transforms the Logo programming language into a visual medium for preliterate children; it benefits both the cognitive and affective sides of learning, and it facilitates the development of syntax knowledge, since the child can grasp the fundamental principles before the typical learning of written language.
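To make the sequential structure concrete: a Bee-Bot program is a flat sequence of commands in which CLEAR empties the robot's memory, the direction and orientation cards form the body, and GO executes the stored program, with each forward or backward step covering one square of the gridded mat. The sketch below is only an illustration of this structure under those assumptions (the study used the physical robot and cards, not software; 90-degree turns in place and the command names are our rendering of the cards described above).

```python
# Illustrative sketch of the Bee-Bot sequential programming structure:
# CLEAR empties memory, direction/orientation commands form the body,
# GO executes the stored program, one FORWARD/BACKWARD step = one square.

HEADINGS = [(0, 1), (1, 0), (0, -1), (-1, 0)]     # N, E, S, W as (dx, dy)

def run_beebot(program, start=(0, 0), heading=0):
    """Execute a 'pseudocode' card sequence and return the visited squares."""
    memory, path = [], [start]
    x, y = start
    for command in program:
        if command == "CLEAR":
            memory = []                            # empty the stored program
        elif command == "GO":
            for step in memory:                    # run the stored program
                if step == "FORWARD":
                    dx, dy = HEADINGS[heading]
                    x, y = x + dx, y + dy
                    path.append((x, y))
                elif step == "BACKWARD":
                    dx, dy = HEADINGS[heading]
                    x, y = x - dx, y - dy
                    path.append((x, y))
                elif step == "TURN_RIGHT":
                    heading = (heading + 1) % 4    # 90-degree turn in place
                elif step == "TURN_LEFT":
                    heading = (heading - 1) % 4
        else:
            memory.append(command)                 # store a command card

    return path

# A correct sequential structure: CLEAR, the command body, then GO.
print(run_beebot(["CLEAR", "FORWARD", "FORWARD", "TURN_RIGHT", "FORWARD", "GO"]))
# [(0, 0), (0, 1), (0, 2), (1, 2)]
```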
4. Results

In this section we present the procedures for developing the ability to debug programs in preschool children. In particular, the individual objectives set relate to the types of errors encountered in the preparation of a program and to the strategies developed by young children when (a) they recognise the existence of an error, then (b) are involved in its detection, and finally (c) correct the error, thus modifying the program, and then (d) retest it. These are also the stages involved in a complete debugging process (Klahr & Carver, 1988). By creating appropriate variables, we are going to present how far preschool children reach, seeking to explore the quality of the programs at the level of syntactic and semantic/logical knowledge. These variables were adapted to Klahr and Carver's conceptual framework, which we used for examining children's debugging process, so the types of errors were divided into two broad categories: a) syntax errors and b) semantic and logical errors. According to these two categories, we designed corresponding variables with individual categories to create, on the one hand, typologies for each type of error and, on the other hand, a typology for the overall debugging process.

Six (06) variables were obtained to record and capture the individual cognitive processes of an integrated debugging process, analysed and organised into twenty-seven (27) modalities. The lowest value reflects the incomplete (non-complete) behaviour or sub-procedure and the highest reflects the complete behaviour of the relevant variable (Table 1). In particular, they have been organised in a linear evolutionary order, starting with those related to the types of errors and ending with those related to the whole debugging process.

Fig. 4. Sequential programming structure.

Table 1
Variables, modalities, frequencies and relative frequencies (N=526).

1. Number of attempts when debug (V1_EvalProg_NumberOfTries)
   1 = V1E_NumberOfMoreTries        f = 37    rf = 7.034
   2 = V1E_NumberOfTries_3_4        f = 40    rf = 7.605
   3 = V1E_NumberOfTries_1_2        f = 449   rf = 85.361

2. Types of errors (V2_EvalProg_TypesOfErrors)
   1 = V2E_TofESyntSemLog           f = 36    rf = 6.844
   2 = V2E_TofESemLogic             f = 184   rf = 34.981
   3 = V2E_TofESyntax               f = 22    rf = 4.183
   4 = V2E_NoErrorsAtAll            f = 284   rf = 53.992

3. Type of error Syntax (V3_EvalProg_SyntaxErrors)
   1 = V3E_SyError_CLEARGO          f = 20    rf = 3.802
   2 = V3E_SyError_CLEAR            f = 25    rf = 4.753
   3 = V3E_SyError_GO               f = 13    rf = 2.471
   4 = V3E_NoSyError                f = 468   rf = 88.973

4. Type of error Semantic-Logical (V4_EvalProg_SemanticLogicalErrors)
   1 = V4E_SemLogError_TurnOriDir   f = 70    rf = 13.308
   2 = V4E_SemLogError_PosNum       f = 3     rf = 0.57
   3 = V4E_SemLogError_TurnNumDir   f = 12    rf = 2.281
   4 = V4E_SemLogError_TurnOri      f = 34    rf = 6.464
   5 = V4E_SemLogError_TurnDir      f = 38    rf = 7.224
   6 = V4E_SemLogError_NumberOri    f = 11    rf = 2.091
   7 = V4E_SemLogError_NumberDir    f = 43    rf = 8.175
   8 = V4E_NoSemLogError            f = 315   rf = 59.886

5. Effectiveness of debugging (V5_EvalProg_ProgramDebugging)
   1 = V5E_NoDebug_ProgramError     f = 122   rf = 23.194
   2 = V5E_Debugging_NotEffective   f = 6     rf = 1.141
   3 = V5E_Debugging_Effective      f = 120   rf = 22.814
   4 = V5E_Debugging_NotAtAll       f = 278   rf = 52.852

6. Strategies of debugging (V6_EvalProg_DebuggingPatterns)
   1 = V6E_DebStr_CardsDebug        f = 15    rf = 2.852
   2 = V6E_DebStr_VerbalDebug       f = 7     rf = 1.331
   3 = V6E_DebStr_DifferentProgram  f = 110   rf = 20.913
   4 = V6E_DebStr_None              f = 394   rf = 74.905
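For reference, each rf value in Table 1 is simply the modality's frequency expressed as a percentage of the 526 children; a minimal check, shown here for Variable V1:

```python
# Quick arithmetic check (illustration only) of the relative frequencies in Table 1.
N = 526
v1 = {"V1E_NumberOfMoreTries": 37, "V1E_NumberOfTries_3_4": 40, "V1E_NumberOfTries_1_2": 449}
rf = {name: round(100 * f / N, 3) for name, f in v1.items()}
print(rf)                     # {'V1E_NumberOfMoreTries': 7.034, 'V1E_NumberOfTries_3_4': 7.605,
                              #  'V1E_NumberOfTries_1_2': 85.361}
print(sum(v1.values()) == N)  # True: the modalities of each variable partition the sample
```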

The added value of these variables is that they capture a self-reflective process, which is mostly displayed by experienced programmers (Solomon & Cambridge, 1986; Sipitakiat & Nusen, 2012). This allows us to have valid data on the process of developing programming ability and CT.

4.1. The different cognitive phases of a debugging 'journey'

The analysis that follows is made according to Klahr and Carver's (1988) framework, which includes the different cognitive phases of the debugging process.

4.2. Phase 1: Test and evaluate program

Observing the behaviour of all children when coding, the majority follow the process: a) writing a program, b) executing the program, c) debugging the program, and d) retesting and evaluating the program. So we define Variable V1, which records the number of times a child coded a program and inputted and executed it on the robot. Though some children were led to an effective program on their first attempt, using a combination of the taught programming strategies, variable V1 enabled us to examine the number of attempts, as a result of an 'error', needed to test a program and successfully complete an algorithmic design. In variable V1, children's efforts are recorded on a scale and the following categories are created: a) one (01) to two (02) times, b) three (03) to four (04) times, and c) more than four times. The underlying idea was to rank the categories on a scale of one (01) to four (04) attempts to classify two types of programming behaviour: a) incidents that exhibited the programming strategies, with potentially more than one attempt, and b) incidents that exhibited errors. Obviously, in each category both types of behaviour appear together. As illustrated in Table 1, 449 of the 526 children (f = 449, rf = 85.36) attempted up to two times to write and execute a program to complete an algorithmic design. As this number of children represents a significant proportion of the sample, it implies interpretations such as that a large proportion of children a) were led to implementing the corresponding programming strategy (or strategies) or b) identified the error relatively easily and were thus led to effective debugging. On the other hand, these interpretations apply to a smaller proportion of children, 40 out of 526 (f=40, rf=7.6), who needed three to four coding attempts. The distribution shows similar behaviour for the 37 out of 526 children (f=37, rf=7.03) who show a greater number of attempts, more than four.

4.3. Phase 2: Identify bug and types of errors

This phase describes the discrepancy observed between the execution of a program and the initial algorithmic design, as reflected in the robotic system's performance. The child reconstructs the program's original design to understand and identify the exact location of errors. Programming errors in computer science are divided into three major categories: a) syntax, b) semantic, and c) logical (Fay & Mayer, 1988). In the present study, we found errors of all three types when using the 'pseudocode' in conjunction with this robotic device. Therefore, we developed appropriate variables to capture and highlight the full range of corresponding behaviours and programming knowledge building, in order to create typologies. In particular, syntactic errors represent 'grammatical errors' in using a programming language. They refer to errors related to the syntactic structure of a programming language and thus to the process of coding a program, i.e. writing the corresponding programming structure. Programming structure in the present context refers to a sequential structure of commands. Within this structure, commands follow a sequential order of succession, whilst each command is executed only once. Moreover, we benefited from the Logo programming language in this process, since its representation in a 'pseudocode' constituted a didactic transformation that facilitated teaching practice and the knowledge-building of coding in a tangible robotic device. Therefore, syntactic 'errors' concern the commands related to the grammatical structure of the program. They occur when coding a program using 'pseudocode' and are related to the function and operation of the robot; at a practical level these are the commands a) 'CLEAR' and b) 'GO', which relate to the grammatical structure. Under this prism, the identified 'bugs' concerning these two commands were related to syntactic knowledge.

However, the most complex errors in terms of cognitive difficulty belong to the second category and are semantic and logical errors. They occurred when an algorithm did not resolve the problem for which it was designed. In fact, for this particular programming context, using 'pseudocode', semantic/logical errors concern the control commands or, in other words, the spatial reasoning commands. As shown further below, their complexity lies in children's understanding of the commands' semantics or of their place in the syntactic sequential order. Consequently, semantic and logical errors may occur during programming and are related to the choice of commands and/or their order when fixing a programme. This type of error is linked to semantic knowledge. Therefore, we classified them in one variable, even if a mistake originated from a misconception of the algorithm's semantics or of its logic. Children had to take control of both these functions and check whether they had entered a) the correct instructions and b) in the correct order. Through this process, novice programmers became aware of the 'state' of an algorithm and thus sought to locate the error(s).

4.4. Types of errors from novice programmers

According to the above, to identify the types of errors we introduced Variable V2 (V2_EvalProg_TypesOfErrors), gathering information from the step at which children started coding a program using the 'pseudocode' for its representation. Thus the corresponding categorisation of types of errors was developed. We followed a quantitative analysis approach to highlight the most general categories, using the qualitative approach to capture the types of errors comprehensively. As shown in Table 1, most of the children in the sample do not show any error (syntactic or semantic/logical). However, the main types of errors (syntactic or semantic/logical) appear with a small distribution, and in addition some children show a combination of errors (syntactic and semantic/logical). More specifically, 284 out of 526 children (f = 284, rf = 53.99) reveal a fully coherent programming behaviour without errors in their programs. In addition, 184 of the 526 children (f=184, rf=34.98) demonstrate semantic/logical errors, 36 (f=36, rf=6.84) show the combination of the syntactic and semantic/logical types, and the remaining 22 children (f=22, rf=4.18) show only syntax errors. In the following parts, we thoroughly present the syntactic and semantic/logical error results and the behaviours children developed accordingly.
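For reference, the four modalities of Variable V2 in Table 1 follow directly from the two broad checks described above; a minimal sketch of that mapping (an illustration, not the authors' coding instrument):

```python
# Illustrative mapping from the two broad error checks to the four V2 modalities.
def classify_v2(has_syntax_error: bool, has_semantic_logical_error: bool) -> str:
    if has_syntax_error and has_semantic_logical_error:
        return "V2E_TofESyntSemLog"   # combination of both error types
    if has_semantic_logical_error:
        return "V2E_TofESemLogic"     # semantic/logical errors only
    if has_syntax_error:
        return "V2E_TofESyntax"       # syntax errors only
    return "V2E_NoErrorsAtAll"        # fully coherent program, no errors

print(classify_v2(False, True))       # V2E_TofESemLogic (184 of the 526 children)
```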
tion of the algorithm’s semantics or logic. Children had to take control mands, and a smaller distribution of 13 children (f=13, rf=2.47) missed
of both these functions and check if they had entered a) the correct in- the ’GO’ command. Therefore, the more significant proportion of the
structions and b) in the correct order. Through this process, novice pro- sample suggests a complete programming behaviour in which syntax er-
grammers became aware of the ’state’ of an algorithm and thus sought rors are absent, so we assume that syntactic knowledge is consolidated.
to locate the error(s). However, a small proportion of children show semi-complete syntactic
knowledge, which relates either to the difficulty of fitting the location
4.4. Types of errors from novice programmers of the ’GO’ command (it was placed at the beginning of a program in-
stead of at the end). Also, the memory command ‘CLEAR’ was usually
According to the above, to identify the types of errors, we intro- not included in writing a program or was mispositioned as happened
duced the Variable V2 (V2_EvalProg_TypesOfErrors) to gather informa- with the program execution command ‘GO’.
tion from the step that children started coding a program using the ’pseu-
docode’ for its representation. Thus the corresponding categorisation of 4.6. Semantic and logical errors
types of errors was developed. We followed the quantitative analysis
approach to highlight the most general categories using the qualitative The sequential program structure’s semantic/logical errors with the
approach to capture the types of errors comprehensively. As shown in robot Bee-Bot are described and analysed qualitatively with the Vari-
Table 1, most of the children in the sample do not show any error (syn- able V4 (V4_EvalProg_SemanticLogical Errors). The categories of this
tactic or semantic/logical). However, the main types of errors (syntactic variable arose when the children resolved an algorithm by perform-
or semantic/logical) appear with a small distribution. In addition, some ing the corresponding program and found that the programmable robot
children show a combination of errors (syntactic and semantic/logical). was on a different path and, therefore, to a different solution from the
More specifically, 284 out of 526 children (f = 284, rf = 53.99) reveal previously preceding algorithmic design. These cases of children were
a fully coherent programming behaviour without errors in their pro- coded in categories depending on the semantic knowledge difficulty
grams. In addition, 184 of the 526 children (f=184, rf=34.98) demon- presented by their programming profiles. In order to highlight the ex-
strate semantic/logic errors, 36 (f=36, rf=6.84) show the combination tent of the semantic/logical errors, we organised them into eight (08)
of syntactic and semantic/logical type, and the remaining 22 children categories, characterising the semantic/logical error in question as it
(f=22, rf=4.18) show only syntax errors. In the following parts, we thor- linked to the commands related to the corresponding programming step

146
A. Misirli and V. Komis Early Childhood Research Quarterly 65 (2023) 139–158

Fig. 5. Syntax errors and sequential programming structure.

Table 2
Typology of syntactic knowledge.

Qualitative characteristics of syntax errors (Variable 3)

Incomplete: V3E_SyError_CLEARGO
  Error in using commands CLEAR & GO. Coding behaviour: wrong position or missing.
SemiComplete: V3E_SyError_CLEAR
  Error in using command CLEAR. Coding behaviour: wrong position or missing.
SemiComplete: V3E_SyError_GO
  Error in using command GO. Coding behaviour: wrong position or missing.
Complete: V3E_NoSyError
  Both control and operation commands (CLEAR & GO) included and in the correct position.
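A compact way to read Table 2 is as a check on the two syntax commands alone. The sketch below is an illustration under the assumption that CLEAR opens and GO closes the card sequence (Section 4.5 reports both 'missing' and 'wrong position' codings); the study itself coded children's physical card layouts and key presses, not strings.

```python
# Illustrative categorisation of a 'pseudocode' sequence into the V3 modalities of Table 2.
def classify_v3(program):
    clear_ok = len(program) > 0 and program[0] == "CLEAR"   # memory command first
    go_ok = len(program) > 0 and program[-1] == "GO"        # execution command last
    if not clear_ok and not go_ok:
        return "V3E_SyError_CLEARGO"    # both CLEAR and GO missing or misplaced
    if not clear_ok:
        return "V3E_SyError_CLEAR"      # CLEAR missing or in the wrong position
    if not go_ok:
        return "V3E_SyError_GO"         # GO missing or in the wrong position
    return "V3E_NoSyError"              # complete syntactic structure

print(classify_v3(["CLEAR", "FORWARD", "TURN_LEFT", "FORWARD", "GO"]))  # V3E_NoSyError
print(classify_v3(["FORWARD", "FORWARD", "GO"]))                        # V3E_SyError_CLEAR
```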

Fig. 6. Semantic/logical errors and sequential programming structure.

The sequential program structure's semantic/logical errors with the robot Bee-Bot are described and analysed qualitatively with the Variable V4 (V4_EvalProg_SemanticLogical Errors). The categories of this variable arose when the children resolved an algorithm by performing the corresponding program and found that the programmable robot was on a different path and, therefore, led to a different solution from the preceding algorithmic design. These cases of children were coded in categories depending on the semantic knowledge difficulty presented by their programming profiles. In order to highlight the extent of the semantic/logical errors, we organised them into eight (08) categories, characterising the semantic/logical error in question as it is linked to the commands related to the corresponding programming step (Fig. 6). Again, we ended up with a semi-complete and complete description of children's programming behaviour.

Semi-complete included a range of behaviours (07 categories) trying to capture the difficulty of building semantic knowledge on both levels of understanding, namely semantic and logical. In contrast, the complete strategy describes a consolidation of semantic knowledge as expressed by the absence of semantic/logical errors (Table 3).

As can be identified from the qualitative content of the modalities, most errors are described as an insufficiency of direction and orientation commands, which is why we call them logical errors. Only two cases are characterised by semantic error and relate to the misconception of orientation commands. However, we decided to record them in different categories in light of the qualitative analysis. In the following, we present the data from the quantitative analysis of Variable V4 to highlight the predominant categories contributing to understanding cognitive programming processes. The majority of the sample, 315 out of 526 children (f=315, rf=60.00), exhibit constructed semantic knowledge without errors; thus, they presented an organised programming behaviour (Table 1). The remaining children are divided into subcategories. The highest distribution falls into the category where the error is related to the difficulty of selecting a combination of orientation and direction commands, to which 70 out of 526 children belong (f=70, rf=13.00). These children might have had more absences or have been the younger ones aged 4.


Table 3
Typology of semantic knowledge.

Qualitative characteristics of semantic/logical errors (Variable 4); type of error: Semantic (S) / Logical (L)

SemiComplete: V4E_SemLogError_TurnOriDir (L)
Error: orientation or direction command
Coding behaviour: Orientation or direction command missing
SemiComplete: V4E_SemLogError_PosNum (L)
Error: position & number of commands
Coding behaviour: Number of command(s) and order of commands differ from the initial verbal coding
SemiComplete: V4E_SemLogError_TurnNumDir (L)
Error: position & number of commands
Coding behaviour: Direction command(s) after an orientation command missing, and overall order of commands differ from the initial verbal coding
SemiComplete: V4E_SemLogError_TurnOri (S/L)
Error: turn orientation
Coding behaviour: Misconception of orientation command; orientation command(s) missing
SemiComplete: V4E_SemLogError_TurnDir (L)
Error: turn direction
Coding behaviour: Direction command after an orientation command missing
SemiComplete: V4E_SemLogError_NumberOri (S/L)
Error: number of orientation commands
Coding behaviour: More or fewer orientation commands than initially required
SemiComplete: V4E_SemLogError_NumberDir (L)
Error: number of direction commands after orientation commands
Coding behaviour: More or fewer direction commands than initially required
Complete: V4E_NoSemLogError
No semantic/logical error
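The semantic/logical categories above were assigned by comparing where a program actually takes the robot with where the child's algorithmic design intended it to go. The Python sketch below illustrates that comparison under simple assumptions: the Bee-Bot is modelled on a square grid, FORWARD/BACKWARDS move one cell along the current heading, LEFT/RIGHT turn 90 degrees in place, and a semantic/logical error is flagged whenever the final cell or heading differs from the intended one. The grid model and the comparison rule are an illustrative reading of the coding scheme, not the authors' procedure.

# Illustrative sketch (not the authors' coding procedure): execute a Bee-Bot-style
# program on a grid and compare the outcome with the intended target.
HEADINGS = ['N', 'E', 'S', 'W']                       # clockwise order
MOVES = {'N': (0, 1), 'E': (1, 0), 'S': (0, -1), 'W': (-1, 0)}

def run(program, start=(0, 0), heading='N'):
    # Return (cell, heading) after executing the movement commands.
    x, y = start
    h = HEADINGS.index(heading)
    for cmd in program:
        if cmd == 'FORWARD':
            dx, dy = MOVES[HEADINGS[h]]
            x, y = x + dx, y + dy
        elif cmd == 'BACKWARDS':
            dx, dy = MOVES[HEADINGS[h]]
            x, y = x - dx, y - dy
        elif cmd == 'RIGHT':                          # 90-degree turn in place
            h = (h + 1) % 4
        elif cmd == 'LEFT':
            h = (h - 1) % 4
        # CLEAR and GO do not move the robot in this model
    return (x, y), HEADINGS[h]

intended = ((1, 2), 'E')                              # target of the algorithmic design
program = ['CLEAR', 'FORWARD', 'FORWARD', 'RIGHT', 'GO']   # direction command after the turn is missing
outcome = run(program)
print('semantic/logical error' if outcome != intended else 'no error', outcome)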

Furthermore, 43 out of 526 children (f=43, rf=8.00) exhibit incomplete semantic knowledge (logical error) in the direction commands; in other words, they encounter fewer or more direction commands than initially required to complete the algorithm. The following logical error with a high distribution, 38 out of 526 children (f=38, rf=7.0), concerns the coding behaviour related to missing direction command(s) after an orientation command. Corresponding in distribution is the category with 34 children out of 526 (f=34, rf=6.00), which captures the coding behaviour regarding misunderstanding of orientation commands and missing orientation commands. Therefore, it is a semantic error if children have not constructed the sides of turns and, consequently, the orientation commands. On the other hand, it is regarded as a logical error in cases where children cannot locate orientation commands in the algorithmic structure. The following categories show roughly the same distribution. Thus, 12 of the 526 children (f=12, rf=2.00) exhibit a coding behaviour with semantic knowledge exhibiting a logical error, which is found in omissions of directional instructions after turn instructions and overall on the set of direction commands in a program. In addition, the category with incorrect semantic knowledge (logical errors) recorded 11 out of 526 children (f=11, rf=2.00), which emerged from the difficulty of selecting an appropriate number of orientation commands. If this category of errors is seen from a semantic prism, we can explain this coding behaviour as a misunderstanding of orientation concepts, confusing the selection of the appropriate number of commands. Finally, the category containing only three children is considered of nominal recorded frequency and of no further interest.

4.7. Phases 3–5: Represent program, locate bug and correct bug

In these three phases, the user should represent the program to investigate the possible location of an error, in other words, to detect the error and then take action to correct it. This process requires changing and replacing one or more errors and retesting the program. Consequently, a programmer should detect, decode and verify whether one or more errors occur. Our study considers all three phases described by two Variables, V5 and V6. Phase 3 regards the representation of a program, so in our study, this phase is met by the program's visual representation with the 'pseudocode'. The teaching sequences of the educational scenarios ensured that all children would visualise and reflect on their verbal algorithmic designs by 'pseudocode'. To that end, each child could easily recognise when an algorithmic design was unsuccessful due to error(s) by matching each robot's movement with each command card. Hence, they were led to detect and locate a bug as suggested in Phase 4, the most demanding cognitive process in debugging. At this point, the child must have identified the location of the error and selected the appropriate command to correct the program effectively and successfully solve the algorithm, Phase 5. Children initially process the solution on an algorithmic design and its validity at this phase, so they judge the 'state' of an algorithm and the coding designed to program the robot.

4.8. Locate and correct bug

Mapping the cognitive process in Phases 4 and 5, we created Variable V5 (V5_EvalProg_ProgramDebugging). Its subcategories are assigned when a child is led to identifying and debugging errors (syntax, semantic/logical, or both) after executing a program or debugging the code. Mainly, it involves four subcategories that capture children's behaviour and cognitive process when they locate and correct one or more errors. Children exhibit incomplete, semi-complete and complete cognitive processes in the range of categories that emerged. One category was not involved in this cognitive process due to the original coding's appropriateness. The categories created and assigned to capture the final stage, the result of a debugging process, are qualitatively presented in the following table (Table 4).

Most of the children, 278 out of 526 (f=278, rf=52.85), presented complete and functional programs, so they did not debug. In the same category fall those children who exhibited the 'step-by-step' programming strategy, as with this strategy they did not come up with errors. Then, 122 out of 526 children (f=122, rf=23.19) belong to the category where their programs show errors without corresponding debugging. In contrast, 120 out of 526 children (f=120, rf=22.81) identify and locate the error and consequently successfully correct it, so in this case, it is called effective debugging. However, out of the whole set of children who proceeded to debug (126/526), only 6 children (f=6, rf=1.14) ultimately failed to modify the inaccurate program while attempting its correction. So they do not complete the process and would have to go again through Phases 3–5 of the debugging model.
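Phase 4 asks the child to match each movement of the robot against the corresponding command card of the 'pseudocode'. A minimal Python sketch of that matching, assuming the intended design and the executed program are both available as ordered command lists (an assumption made for the example, not a feature of the study's instruments), is to step through the two sequences in parallel, as with the 'step-by-step' button, and report the first position at which they diverge.

# Minimal sketch: locate the first step at which the executed program diverges
# from the intended algorithmic design.
def locate_bug(intended, executed):
    # Return (step, expected, found) for the first differing command, or None.
    for step, (want, got) in enumerate(zip(intended, executed), start=1):
        if want != got:
            return step, want, got
    if len(intended) != len(executed):                # missing or extra commands at the end
        return min(len(intended), len(executed)) + 1, None, None
    return None                                       # programs match: nothing to debug

intended = ['CLEAR', 'FORWARD', 'LEFT', 'FORWARD', 'GO']
executed = ['CLEAR', 'FORWARD', 'FORWARD', 'GO']
print(locate_bug(intended, executed))                 # (3, 'LEFT', 'FORWARD')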


Table 4
Typology of debugging effectiveness.

Qualitative characteristics of debugging effectiveness (Variable 5)

Incomplete: V5E_NoDebug_ProgramError
Error in coding. The child does not identify it, and thus no debugging occurs.
Semi-complete: V5E_Debugging_NotEffective
Unsuccessful debugging: the algorithmic design does not resolve the problem.
Complete: V5E_Debugging_Effective
Successful debugging: the algorithmic design resolves the problem.
No debugging: V5E_Debugging_NotAtAll
Successful coding; the algorithmic design resolves the problem. Also, it is applied for children who used the programming strategy ‘step-by-step’.
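Read linearly, Table 4 maps two observations onto the four V5 categories: whether the executed program contained an error, and whether an attempted correction resolved it. The Python sketch below encodes that mapping and recomputes the relative frequencies reported in the text (278, 122, 120 and 6 children out of 526); the boolean encoding of the observations is an illustrative assumption, not the authors' coding sheet.

# Illustrative sketch of the Table 4 categories and the reported distribution;
# rf values are percentages of the 526 children in the sample.
def v5_category(has_error, attempted_debug, resolved):
    if not has_error:
        return 'V5E_Debugging_NotAtAll'               # successful coding, nothing to debug
    if not attempted_debug:
        return 'V5E_NoDebug_ProgramError'             # error present but not identified
    return 'V5E_Debugging_Effective' if resolved else 'V5E_Debugging_NotEffective'

N = 526
counts = {                                            # frequencies reported in the text
    'V5E_Debugging_NotAtAll': 278,
    'V5E_NoDebug_ProgramError': 122,
    'V5E_Debugging_Effective': 120,
    'V5E_Debugging_NotEffective': 6,
}
for code, f in counts.items():
    print(f'{code}: f={f}, rf={100 * f / N:.2f}')     # e.g. 278 -> rf=52.85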

Table 5
Typology of debugging strategies.

Qualitative characteristics of debugging strategies (Variable 6)

Incomplete: V6E_DebStr_CardsDebug
Assigned when a child verbally developed the algorithmic design, but when writing the program by ’pseudocode’, realised that it eventually needs to be differentiated
from the verbalisation and proceeded to recode.
Semi-complete: V6E_DebStr_VerbalDebug
Assigned when a child verbally articulated the debugging but did not code and execute a new program to correct the error and retest it.
Complete: V6E_DebStr_DifferentProgram
Assigned when a child coded a new program to correct an error.
No debugging: V6E_DebStr_None
There was no error in the coding and execution of the program, and even if there was an error, the subject did not identify it or locate the error to debug.

The Variable V6 (V6_EvalProg_Debugging Strategies) is the most important one as it is assigned to children to capture the specific strategy they developed when they debugged. In particular, it includes four (04) subcategories that record children's cognitive process to correct an error. The range of categories indicates that children primarily develop debugging strategies of experienced programmers, such as creating a new program. However, they also develop other strategies on which no literature is available to report. What is essential for all categories is that the specific cognitive processes consist of elaborations which in turn lead to the construction of procedural knowledge, i.e., how a programmer can complete a debugging process (Komis, 2005). The qualitative characteristics of the variable are contained in the table (Table 5).

As seen from the quantitative data of variable V6, two of the four categories under study are the most prevalent. In particular, the category of children with complete programming profiles includes more than two-thirds of the sample (394 out of 526 children, f = 394, rf = 74.9); consequently, these children were not driven to debugging procedures. The next category with a significant frequency consists of 110 out of 526 children (f=110, rf=20.91) and involves implementing the new program strategy to complete debugging. Subsequently, the remaining two categories show minimal distributions and thus do not constitute significant findings. Thus, 15 out of 526 children (f=15, rf=2.85) proceed only to verbal debugging and 7 out of 526 children (f=7, rf=1.33) proceed to use the 'pseudocode'.

5. Discussion

This study provides empirical evidence from a large-scale study that teaching preschool students (ages 4–6) to code through the tangible robot Bee-Bot can accelerate the acquisition of one of CT's practices, such as debugging. The current study uses a developmentally appropriate educational intervention and is the first to document and report on typologies of debugging behaviours linked to syntactic and semantic knowledge. Our research demonstrates the debugging behaviours young preliterate children exhibit while processing programming without having prior experience but trying to understand if a program is fixable (Papert, 1993, p. 23) when programming the tangible robot Bee-Bot, and what processes emerge in order to identify, locate and correct errors. The children were not aware of what steps a debugging process consists of, nor were they taught so. Thus we may suggest that children were led to debug intuitively and as a result of their logical process to make their program fixable to fulfil their algorithm and solve the problem. Although Cuneo (1986) suggested otherwise in a graphic-programming environment, in our study, the children managed to debug their incorrect programs even though they were not meant to do so, and those programs were more complex than three commands. However, most probably, tangible programming facilitated and supported this process. In this programming context, we applied Klahr's framework for assessing debugging abilities and describing strategy aspects to provide indicators of stages of the debugging process matched to preschoolers' development of CT. The types of errors are categorised in syntactic and semantic knowledge, and the individual strategies children develop when they recognise the existence of an error are highlighted. We have shown how 'step-by-step' control allowed children to better articulate and pinpoint errors in the program, as suggested by Sipitakiat and Nusen (2012). We designed the 'step-by-step' function to allow children to slow down the execution process and have more time to observe and compare the code simultaneously during the robot's execution. Without this function, children had to choose whether to pay attention to the floor robot or the 'pseudocode' set in a program. The former was usually the case, which often made it difficult, especially for children aged 4–5, to associate the observed problem with the bug in the program. On the other hand, when using the 'step-by-step' function, students pushed the execution button to run one command at a time, so they minimised error risk-taking and articulated the algorithmic design to resolve a problem at their own pace.

5.1. Identification of an error and reflective process

Firstly, regarding the research question about the process children follow to identify the existence of error and locate it, we gather evidence from Variable 1. As shown, most children debugged the same program more than once but up to two times. The same evidence is shared by Gould and Drongowski (1974), so we can suggest that it is common for this age group to attempt two tries for debugging.

5.2. Types of errors

Regarding the research question on the types of errors that emerged when coding, an interesting finding is the presence of the two main types of errors: i) syntax and ii) semantic/logical. The former type records most children exhibiting a complete programming behaviour, in which syntactic errors are absent. However, a small proportion of children exhibit semi-complete syntactic knowledge; possibly this relates to difficulty building the memory command's position and perhaps its function. It would be helpful for future reference to relate this finding


to the relative age group and cognitive development. The latter type of error is the predominant category of semantic/logical errors belonging to the high-level cognitive process. It was found that this type was likely more complex for children to build. The qualitative data analysis shows that the semantic errors relate to a misunderstanding of the location or numbering of the commands.

In contrast, the logical errors relate to the missing programmable robot's control commands. The number of errors in a program ranges from one (1) to three (3); for debugging them, novice programmers end up with appropriate strategies. Each child appeared to identify one error for each program debugging attempt. Most children appear to form semantic knowledge, so we attribute to them a well-organised and consolidated programming behaviour.

The number of children presented with the highest semantic/logical errors distribution was divided into the categories 'Semantic/Logical Error_TurnOriDir' and 'Semantic/Logical Error_NumDir'. These children had difficulty selecting a combination of two (2) commands (orientation and direction) in structural parts of a program. Evidence suggests that children tended to delete more commands than needed or even compose a new program to fix the error (Misirli & Komis, 2020). This evidence suggests the error was made either by their difficulty in spatial reasoning or by an error in counting (Shumway et al., 2021), even though the scenarios proved effective in constructing spatial reasoning (Misirli et al., 2019). Thus, this semantic/logical error type seems to be the most difficult for novice programmers.

Additionally, we have not observed 'initialisation bugs taking place in the physical, tangible domain where they represent failures to re-set the robot itself on the proper trajectory' (Silvis et al., 2021). Such bugs would have caused semantic/logical errors, but the scenario-based pedagogical design had predicted this kind of device complexity and anticipated adequate activities. Overall, bugs in the programs most often involved turn errors and missing codes (Silvis et al., 2021), but not those involving controller errors.

5.3. Strategies of debugging and types of knowledge

Furthermore, regarding the strategies developed by novice programmers to debug effectively, children having already represented a program on 'pseudocode' might have been helped to move to the debugging process without an effective result. As Ahn et al. (2021) suggest, the learning experience in varied forms (plugged and unplugged) that leads to experiential and meaningful learning can increase accessibility to programming education.

As shown in our study, half of the children implemented the debugging strategy by composing a 'new program' to complete their debugging process. A 'new program' is a twofold process of identifying and correcting an error in the original code or writing a new code from scratch to accomplish their algorithmic design.

From the above, we conclude that in Phases 3–5, the preschool children have already represented a program since it was a prerequisite of the teaching intervention, moving some of them to the debugging process without being led to an effective process. As shown, half of the children in the sample design and compose successful programs in solving an algorithm. However, a significant number still present difficulties in syntactic and semantic knowledge, manage to identify some of the errors they present (syntax and semantic/logical) and are led to appropriate correction and modification of the program. Nevertheless, a significant number of children continue to show syntactic knowledge difficulties. It might be that the programming environment of tangible robotics is not adequate for consolidating syntactic knowledge. However, graphical programming such as Scratch prevents syntax errors, as students can drag and drop the code to build programs without making syntax errors (Resnick et al., 2009; Maloney et al., 2010; Qian & Lehman, 2017), and it is also reported that it can help novices build a better understanding of some programming concepts (Weintrop & Wilensky, 2018; Chou, 2019). It would be interesting to examine the development of syntactic knowledge in a graphical programming environment such as Scratch using our scenarios for teaching and compare results to the PTD curriculum and children's lower scores in debugging (Pugnali et al., 2017), as in both teaching frameworks the sole focus was not on debugging. Furthermore, this study sheds light on the importance of teachers' well-designed professional development that needs to focus explicitly on how to use tangible robots and develop programming skills to promote computational thinking whilst providing a developmentally appropriate pedagogical design on planning to teach, as suggested by Chalmers (2018).

6. Limitations and future research directions

This study was conducted in public schools during their regular school timetable. We were aware that, no matter how dedicated teachers might be and how successfully they taught these contents in their classrooms, they had different teaching styles and approaches, although we had tested all scenarios before delivering the professional development in robotics education. Thus, as with any study in a school setting, we acknowledge that this study faced several environmental limitations. Even though each participating teacher taught the same scenario, controlling all factors from different school settings was not feasible; this may have influenced the results. For example, absences for each child were not recorded, which might be one factor that influenced some children to develop segmented programming skills. In addition, while some teachers followed a more constructivist approach in their teaching, as was implied in the scenarios, others more likely followed a teacher-led mode of implementation.

Furthermore, another element to examine is follow-up cases, even though only a small number occur, to discuss their programming profiles towards the evolution of syntactic and semantic knowledge. That would probably provide great insight into debugging or other aspects of CT that might emerge, along with differences between novices and experienced programmers. Also, investigating pair debugging (McCauley et al., 2008) would be another possibility to expand on the findings of this study in regard to debugging strategies and their evolution. Further analysis would examine a correlation between gender and the two age groups to highlight CT skills and abilities that emerge even though they were not initially planned. Ideally, a focus group of teachers who participated in this professional development would provide more insight into their confidence and subject knowledge in CT education with young children (Chalmers, 2018; Wang et al., 2021).

Despite the limitations of the study described in this paper, post-study evaluation data collected from the teachers show the success of robotics in education. All the teachers mentioned they would participate in this project again. Furthermore, most of them purchased robots for their classrooms or borrowed them from us to continue teaching programming after the end of the European project. Nevertheless, it was not only teachers supporting this positive feedback; post-study evaluation data from children drew attention to the success and enthusiasm reported and asked for exploring more modalities on the robot, like the pause button. Therefore, this feedback highlights the widespread pedagogical adequacy and educational nature of this process.

7. Conclusion

As Bers writes in Coding as a Playground:

'Once children understand how to debug their systems, they start to develop common troubleshooting strategies that can be used on a variety of computing systems. Learning how to debug is an important skill similar to checking your work in math or editing in literacy. It teaches the powerful lesson that things do not just happen to work on the first try, but many iterations are usually necessary to get it right' (Bers, 2018a, p.77).

In the present work, we approached debugging as a CT practice studied through programming in a tangible robotic context as emerged from


the programming activity of young children. Klahr & Carver's framework was applied to analyse young children's programming behaviour and identify the development of syntactic and semantic/logical knowledge through common errors and how they debug them. Furthermore, we organised the debugging skills in typologies for each type of error and for the overall debugging process. The use of 'pseudocode' clearly worked out as a medium visually representing a program, thus facilitating novice programmers to take a step back and reflect on their programming. The children using the 'pseudocode' gradually developed coding skills, starting from the basic 'step-by-step' strategy and uplifting to the 'automated' one. The findings corresponded to the conceptual constructs of high-level thinking skills in debugging. At the same time, the number of case studies provided a more reliable context of analysis, and the skills observed are not low-level abilities as implied by Xu and Rajlich (2004).

In this study, we have shown that debugging as a core practice of CT may emerge in preschool children when controlling a tangible robot without using the 'step-by-step' strategy or particular design aspects to teach debugging, as was suggested by Nusen and Sipitakiat (2011) and Sipitakiat and Nusen (2012). In addition, it is shown that recognising and separating themselves from the code and the machine is a common problem for novices in programming (Pea, 1986). Further work needs to be done on various tangible robotic tools and the challenges and opportunities these provide (Wing, 2008). Though the findings of this study come from a robotics context, they could still provide researchers and educators with explicit information on children's debugging process to support their teaching, and they can be applied to other tangible robots or block-based programming environments (Bers, 2018b; Kim et al., 2018; Silvis et al., 2021). Furthermore, it will help us to expand and improve our analysis and conceptual programming model of scenario-based design (Misirli & Komis, 2014; Komis et al., 2017).

Considering that debugging is one of the CT aspects, the benefits of this line of research are likely to include investigating how teachers debug the code programmed by others (e.g., students), helping them debug, and providing knowledge towards cultivating a process-orientated teaching design rather than one based only on outcomes of successful or unsuccessful programming (Chalmers, 2018; Kim et al., 2018; Relkin & Bers, 2021; Silvis et al., 2021). Above all, institutional commitment to teaching coding and CT to young children is strongly recommended, besides integration into early childhood curricula and policies (Relkin & Bers, 2021).
Formatting of funding sources

The publication fees of this manuscript have been financed by the Research Council of the University of Patras.

Data availability

Data will be made available on request.

CRediT authorship contribution statement

Anastasia Misirli: Conceptualization, Methodology, Validation, Investigation, Resources, Data curation, Project administration, Formal analysis, Writing – original draft, Writing – review & editing. Vassilis Komis: Conceptualization, Methodology, Formal analysis, Project administration, Funding acquisition, Supervision, Writing – original draft.

Acknowledgements

The authors would like to thank "The Fibonacci Project - Disseminating Inquiry-Based Science and Mathematics Education in Europe" – FP7. They also acknowledge the willingness of participating partner schools (children and educators) to undertake Robotics and Programming activities under this European project.

Appendix 1

Assessment of prior/post mental representations & subject-knowledge.

Individual interview (pre & post-test).

Name of child / Age
Date:
Duration:

1. What do you think the Bee-Bot is?
2. What do you think the Bee-Bot does?
3. How does the Bee-Bot fly/move…?
4. Are all buttons the same? If the answer is 'No', continue with the question: How are they different?
5. What do you think you can do with the buttons?
6. Forward arrow button: What do you think will happen if you press it?
7. Backwards arrow button: What do you think will happen if you press it?
8. Left arrow button: What do you think will happen if you press it?
9. Right arrow button: What do you think will happen if you press it?
10. Button 'GO': What do you think will happen if you press it?
11. Button 'CLEAR': What do you think will happen if you press it?
12. What do you think is a robot?
Appendix 2

Description of subject-knowledge evaluation activity as included in Scenario 1: Spatial reasoning.


Individual interview (post-test).

Scenario 1: Spatial reasoning


Playful learning context: The class with their friend Bee-Bot visit the local road safety park

For the evaluation activity, the narration was a new context to reinforce the generalisation of knowledge. Here the class visits the neighbourhood's playground; therefore, the images match this context.
The educator asks each child to choose a toy and move the Bee-Bot (BB) towards it. However, for
each toy, there is a special arrangement (teaching agreement):
i) The BB should pass from the swing to get to the seesaw
ii) The BB should arrive on the top side and pass from the seesaw to get to the swing
iii) The BB should pass from the swing to get to the slide
Note: the BB should arrive at each toy and look at it.

Assessment instrument of subject-knowledge evaluation activity for Scenario 1.

Name:
Age:
1. You can circle or underline the child's toy choice on the grid.
2. In the next column, record the verbalisation of the algorithm starting with the number 1 for the first command and continuing onwards.
• FORWARD
• LEFT
• RIGHT
• BACKWARDS
• CLEAR
• GO
3. In the next column, record the program's syntax starting with the number 1 for the first command and continuing onwards.
• CLEAR
• ↑
• ↓
• ←
• →
• GO
Notes:

Appendix 3

Description of subject-knowledge evaluation activity as included in Scenario 2: Measurement.


Individual interview (post-test).

Scenario 2: Measurement
Playful learning context: The class with their friend the Bee-Bot visit the local Fun-Fair

For the evaluation activity, the narration was a new context to reinforce the generalisation of
knowledge. Here, the class visits the local museum of zoology. Therefore numbers 1, 2 & 3 represent
the museum’s animal images.
The educator asks each child the following questions:
i) Which is the shortest path to the Bee-Bot? (the answer is valid only if counting justification is
given)
ii) Which animal do you find on this path?
iii) How can the Bee-Bot get to the animal using the command cards?
Note: the BB should arrive at each animal and look at it.

Assessment instrument of subject-knowledge evaluation activity for Scenario 2.

Name:
Age:
Columns: Animal 1 | Animal 2 | Animal 3 | ↑ | ↓ | ↖ | ↗ | 'GO' | 'CLEAR'
Which is the shortest path to the Bee-Bot? (the answer is valid only if counting justification is given). Exemplar answer: 'the shortest path is the one which has 4 squares, cause the other paths have more'.
Which animal do you find on this path? Note 'X' to the animal.
How can the Bee-Bot get to the animal using the command cards? Record each programme starting with the number 1 for the first command and continuing onwards. Here is an example of how it should look when a programme is filled in: 1 under 'CLEAR', 2, 3, 4, 5, 6 under the arrow command used, and 7 under 'GO'.
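The example row of this instrument numbers each pressed command in order, starting from 1. The following hypothetical Python helper (not part of the published instrument; the choice of the ↑ arrow for the example is an assumption) reproduces such a record from a list of presses.

# Hypothetical helper (not part of the published instrument): number each
# pressed command in order, as in the example row (CLEAR = 1, ..., GO = 7).
def record_programme(presses):
    return {step: cmd for step, cmd in enumerate(presses, start=1)}

example = ['CLEAR', '↑', '↑', '↑', '↑', '↑', 'GO']
print(record_programme(example))
# {1: 'CLEAR', 2: '↑', 3: '↑', 4: '↑', 5: '↑', 6: '↑', 7: 'GO'}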

Appendix 4

Description of subject-knowledge evaluation activity as included in Scenario 3: Memory/iteration.


Individual interview (post-test).

Scenario 3: Memory/iteration
Playful learning context: The class asks the Bee-Bot to accomplish different tasks

For the evaluation activity, the narration was a new context to reinforce the generalisation of knowledge (knowledge
transfer). The class asks the BB to get to the bookcase or any other context they may encounter. The teaching contract is to
determine how this can happen using the iteration structure solely.
The educator asks each child the following question:
i) How can the Bee-Bot reach the bookcase using the repetition command?
Note: the BB should arrive at the top and look forward to the bookcase or any other object/image. More importantly, each
child needs to answer what they should do to make the BB forget their program before they hand it in to the next child.

Assessment instrument of subject-knowledge evaluation activity for Scenario 3.

Name:
Age:
1. In the next column, record the verbalisation of the algorithm starting with the number 1 for the first command and continuing onwards.
• FORWARD
• LEFT
• RIGHT
• BACKWARDS
• CLEAR
• GO
2. In the next column, record the program's syntax starting with the number 1 for the first command and continuing onwards.
• CLEAR
• ↑
• ↓
• ←
• →
• GO
Notes:

References Grover, S., & Pea, R. (2013). Computational Thinking in K–12: A Review of the State of
the Field. Educational Researcher, 42(1), 38–43. 10.3102/0013189X12463051.
Ahn, J, Sung, W., & Black, B. J. (2021). Unplugged debugging activities for develop- Guzdial, M. (2008). Paving the way for computational thinking. Communications of the
ing young learners’ debugging skills. Journal of Research in Childhood Education. ACM, 51(8), 25–27. 10.1145/1378704.1378713.
10.1080/02568543.2021.1981503. Heikkilä, M., & Mannila, L. (2018). Debugging in programming as a multimodal practice
Albrecht, E., & Grabowski, J. (2020). Sometimes It’s Just Sloppiness – Studying Students’ in early childhood education settings. Multimodal Technologies and Interaction, 2(42).
Programming Errors and Misconceptions. In Proceedings of the 51st ACM Technical Hoc, J. M., Green, T. R. G., Samurcay, R., & Gilmore, D. J. (1990). Psychology of program-
Symposium on Computer Science Education (Portland, USA) (SIGCSE ’20) (pp. 340–345). ming. Academic Press ISBN 0-12-350772-3.
New York, NY: Association for Computing Machinery. IEEE (1994). IEEE standard classification for software anomalies. IEEE Computer Society.
Allan, V., Barr, V., Brylow, D., & Hambrusch, S. (2010). Computational thinking in Johnson, F., McQuistin, S., & O’Donnell, J. (2020). Analysis of student misconceptions
high school courses. In Proceedings of the 41st ACM technical symposium on Com- using python as an introductory programming language. In Proceedings of the 4th Con-
puter science education, SIGCSE 2010, Milwaukee, Wisconsin, USA March 10-13. ference on Computing Education Practice 2020, Durham, UK, 09 Jan 2020 (p. 4). ISBN
10.1145/1734263.1734395. 9781450377294. 10.1145/3372356.3372360.
Anderson, T., & Shattuck, J. (2012). Design-based research: A decade of progress in edu- Kelly, A. E., Lesh, R. A., & Baek, J. Y. (2008). Handbook of design research methods in
cation research? Educational Researcher, 41(1), 16–25. 10.3102/0013189X11428813. education: Innovations in science, technology, engineering, and mathematics learning and
Angeli, C., & Valanides, N. (2020). Developing young children’s computational thinking teaching. New York: Routledge.
with educational robotics: An interaction effect between gender and scaffolding strat- Kim, C., Yuan, J., Vasconcelos, L., Shin, M., & Hill, R. B. (2018). Debugging during block-
egy. Computers in Human Behavior, 105. 10.1016/j.chb.2019.03.018. -based programming. Instructional Science, 46(5), 767–787.
Angeli, C., Voogt, J., Fluck, A., Webb, M., Cox, M., Malyn-Smith, J., & Zagami, J. (2016). Klahr, D., & Mc Coy Carver, S. (1988). Cognitive objectives in a LOGO debug-
A K6 computational thinking curriculum framework: Implications for teacher knowl- ging curriculum: Instruction, Learning and Tranfer. Cognitive Psychology, 20, 362–
edge. Journal of Educational Technology and Society, 19(3), 47–57. 404.
Arfé, B., Vardanega, T., Montuori, C., & Lavanga, M. (2019). Coding in primary grades Komis, V. (2005). Introduction to Teaching Informatics. Klidarithmos: Athens.
boosts children’s executive functions. Frontiers in Psychology, 10, 2713. 10.3389/fp- Komis, V., & Misirli, A. (2015). Apprendre à programmer à l’école maternelle à l’aide de
syg.2019.02713. jouets programmables. In G.-L. Baron, É. Bruillard, & B. Drot-Delange (Eds.), Infor-
Barr, V., & Stephenson, C. (2011). Bringing computational thinking to K-12: What is In- matique en éducation: Perspectives curriculaires et didactiques (pp. 209–226). Clermon-
volved and what is the role of the computer science education community? ACM t-Ferrand: Presses Universitaires Blaise-Pascal.
Inroads, 2(1), 48–54. 10.1145/1929887.1929905. Komis, V., Romero, M., & Misirli, A. (2017). A Scenario-Based Approach for Design-
Bers, M. U. (2018a). Coding and computational thinking in early childhood: The ing Educational Robotics Activities for Co-creative Problem Solving. In Alimisis, &
impact of ScratchJr in Europe. European Journal of STEM Education, 3(3), 08. M. Moro (Eds.), Educational Robotics in the Makers Era, Advances in Intelligent Systems
10.20897/ejsteme/3868. and Computing: vol. 560 (pp. 158–169). Springer International Publishing AG 2017D.
Bers, M. U. (2018b). Coding as a playground: Programming and computational thinking in the 10.1007/978-3-319-55553-9_12.
early childhood classroom. New York, NY: Routledge Press. Kong, S.C. (2019). Components and methods of evaluating computational thinking for
Bers, M. U., Flannery, L. P., Kazakoff, E. R., & Sullivan, A. (2014). Computational thinking fostering creative problem-solvers in senior primary school education. Computational
and tinkering: Exploration of an early childhood robotics curriculum. Computers & thinking education, 119–141. 10.1007/978-981-13-6528-7_8.
Education, 72, 145–157. Kurland, M. D., Pea, R. D., Clement, C., & Mawby, R. (1989). A study of the development
Bers, M. U., Strawhacker, A., & Sullivan, A. (2022). The state of the field of computational of programming ability and thinking skills in High School students. In E. Soloway, &
thinking in early childhood education". OECD education working papers, no. 274. C. J. Spohrer (Eds.), Studying the novice programmer. New Jersey: Lawrence Erlbaum
Paris: OECD Publishing. 10.1787/3354387a-en. Associates.
Brennan, K., & Resnick, M. (2012). Using artifact-based interviews to study the develop- Lee, I, Martin, F, Denner, J., Coulter, B, Allan, W, Erickson, J, Malyn-Smith, J., &
ment of computational thinking in interactive media design. Paper presented at annual Werner, L. (2011). Computational thinking for youth in practice. ACM Inroads, 2(1),
American Educational Research Association meeting, Vancouver, BC, Canada. 32–37. 10.1145/1929887.1929902.
Brusilovsky, P. (1993). Program visualization as a debugging tool for novices. In S. Ash- Lee, J., & Junoh, J. (2019). Implementing unplugged coding activities in early childhood.
lund, K. Mullet, A. Henderson, E. Hollnagel, & T. White (Eds.), Proceedings of INTER- Early Childhood Education Journal 47(3). 10.1007/s10643-019-00967-z.
ACT ’93 and CHI ’93 conference Companion on human factors in computing systems Luxton-Reilly, A., McMillan, E., Stevenson, E., Tempero, E., & Denny, P. (2018). Ladebug:
(pp. 29–30). New York: ACM Press. An online tool to help novice programmers improve their debugging skills. In Pro-
Carver, S. M., & Klahr, D. (1986). Assessing children’s LOGO debugging skills with a formal ceedings of the 23rd Annual ACM Conference on Innovation and Technology in Computer
model. The Journal of Educational Computing Research, 2, 487–525. Science Education July 2018 (pp. 159–164). 10.1145/3.
Chalmers, C. (2018). Robotics and computational thinking in primary school. International Lye, S. Y., & Koh, J. H. (2014). Review on teaching and learning of computational think-
Journal of Child Computer Interaction, 17, 93–100. 10.1016/j.ijcci.2018.06.005. ing through programming: What is next for K-12? Computers in Human Behavior, 41,
Chou, P. (2019). Using ScratchJr to Foster Young children’s computational thinking com- 51–61.
petence: A case study in a third-grade computer class. Journal of Educational Computing Maloney, J., Resnick, M., Rusk, N., Silverman, B., & Eastmond, E. (2010). The scratch
Research, 0(0), 1–26. 10.1177/0735633119872908. programming language and environment. ACM Transactions on Computing Education
Creswell, W. J. (2009). Research design: Qualitative, quantitative, and mixed methods ap- 10(4), Article 16. 10.1145/1868358.1868363.
proaches. Thousand Oaks, CA: SAGE Publications. Mayer, R. E. (1988). Teaching and learning computer programming. multiple research perspec-
Cuneo, D. O. (1986). Young Children and Turtle Graphics Programming: Generating and tives (1st ed.). Routledge.
Debugging Simple Turtle Programs. Annual meeting of the american educational research McCauley, R., Fitzgerald, F., Lewandowski, G., Murphy, L., Simon, B., Thomas, L., & Zan-
association. der, C. (2008). Debugging: A review of the literature from an educational perspective.
De Paula, B. H., Burn, A., Noss, R., & Valente, J. A. (2018). Playing Beowulf: Bridging com- Computer Science Education, 18(2), 67–92.
putational thinking, arts and literature through game-making. International Journal of Metzger, R. (2004). Debugging by thinking: A multidisciplinary approach. Burlington, MA:
Child-Computer Interaction, 16, 39–46. Elsevier Digital Press.
Emerson, A., Smith, A., Rodriguez, F. J., Wiebe, E. N., Mott, B. W., Boyer, K. E., & Misirli, A., & Komis, V. (2014). Robotics and programming concepts in early childhood
Lester, J. C. (2020). Cluster-Based Analysis of Novice Coding Misconceptions in Block- education: A conceptual framework for designing educational scenarios. In C. Kara-
Based Programming. In Proceedings of the 51st ACM Technical Symposium on Computer giannidis, P. Politis, & I. Karasavvidis (Eds.), Research on e-Learning and ict in education.
Science Education (Portland, USA) (SIGCSE ’20) (pp. 825–831). New York, NY: Asso- Springer. 10.1007/978-1-4614-6501-0_8.
ciation for Computing Machinery. 10.1145/3328778.3366924. Misirli, A, & Komis, V. (2020). Emerged debugging abilities in early childhood
Fay, A. L., & Mayer, R. E. (1988). Learning LOGO: A cognitive analysis. In R. E. Mayer education. In Proceedings of Constructionism 2020 (pp. 92–95). http://www.
(Ed.), Teaching and learning computer programming: Multiple research perspectives. constructionismconf.org/wp-content/uploads/2020/05/C2020-Proceedings.pdf.
Lawrence Erlbaum Associates Inc. Misirli, A., Komis, V., & Ravanis, K. (2019). The construction of spatial awareness in early
Fessakis, G., Gouli, E., & Mavroudi, E. (2013). Problem solving by 5–6 years old kinder- childhood: The effect of an educational scenario-based programming environment.
garten children in a computer programming environment: A case study. Computers & Review of Science, Mathematics and ICT Education, 13, 111–124.
Education, 63, 87–97. 10.1016/j.compedu.2012.11.016. Mukasheva, M., & Omirzakova, A. (2021). Computational thinking assessment at
Fields, D. A., Searle, K. A., & Kafai, Y. B. (2016). Deconstruction kits for learning: Students’ primary school in the context of learning programming. World Journal on
collaborative debugging of electronic textile designs. In P. Blikstein, M. Berland, & Educational Technology: Current Issues, 13(3), 336–353. 10.18844/wjet.v13i3.
A. Fields (Eds.), Proceedings of the 6th Annual Conference on Creativity and Fabrication 5918.
in Education (pp. 82–85). New York, NY: ACM. 10.1145/3003397.3003410. Murphy, L., Lewandowski, G., McCauley, R., Simon, B., Thomas, L., & Zander, C. (2008).
Fincher, A. S., & Robins, V. A. (2019). The cambridge handbook of computing education Debugging: The good, the bad, and the quirky-a qualitative analysis of novices’ strate-
research. Cambridge University Press. gies. ACM SIGCSE Bulletin, 40(1), 163–167.
Flannery, L. P., & Bers, M. U. (2013). Let’s Dance the “Robot Hokey-Pokey!”: Children’s Newhouse, C.P. , Cooper, M., & Cordery, Z. (2017). Programmable toys and free play in
programming approaches and achievement throughout early cognitive development. early childhood classrooms. Australian Educational Computing, 32.
Journal of Research on Technology in Education, 46(1), 81–101. Nusen, N., Sipitakiat, A., et al., (2011). Proceedings of the 19th International Conference
Gomes, T. C. S., Falcão, T. P., & Tedesco, P. C. D. A. R. (2018). Exploring an approach based on Computers in Education. Chiang Mai. In T. Hirashima (Ed.). Thailand: Asia-Pacific
on digital games for teaching programming concepts to young children. In Proceedings Society for Computers in Education.
of the 17th Brazilian Symposium on Human Factors in Computing Systems. Porto Alegre: Papert, S. (1993). Mindstorms: Children, computers and powerful ideas (2nd ed.). New York:
SBC. 10.5753/ihc.2018.4227. Basic Books.
Gould, J. D., & Drongowski, P. (1974). An exploratory study of computer program debug- Pea, R. D. (1986). Language-Independent Conceptual Bugs in Program Understanding.
ging. Human Factors, 16(3), 258–277. 10.1177/001872087401600308. Journal of Educational Computing Research, 2(1).


Pugnali, A., Sullivan, A., & Bers, M. U. (2017). The Impact of User Interface on Young Strawhacker, A., Lee, M., & Bers, M. U. (2018). Teaching tools, teachers’ rules: Explor-
Children’s Computational Thinking. Journal of Information Technology Education: In- ing the impact of teaching styles on young children’s programming knowledge in
novations in Practice, 16, 171–193. ScratchJr. International Journal of Technology and Design Education, 28, 347–376.
Qian, Y., & Lehman, J. (2017). Students’ misconceptions and other difficulties in intro- Sullivan, A., & Bers, M. U. (2013). Gender differences in kindergarteners’ robotics and
ductory programming: A literature review. ACM Transactions on Computing Education programming achievement. International Journal of Technology and Design Education,
18(1), Article 1. 10.1145/3077618. 23(3), 691–702.
Relkin, E., de Ruiter, L.E. , & Bers, M.U. (2021). Learning to code and the acquisition of Swidan, A., Hermans, F., & Smit, M. (2018). Programming Misconceptions for School Stu-
computational thinking by young children. Computers & Education, 169, 104222. ISSN dents. In Proceedings of the 2018 ACM Conference on International Computing Education
0360-1315. 10.1016/j.compedu.2021.104222. Research, August 2018 (pp. 151–159). 10.1145/3230977.3230995.
Relkin, E., & Bers, M. U. (2021). Factors influencing learning of computa- The Design-Based Research Collective. (2003). Design-based research: An emerging
tional thinking skills in young children. Virtual Annual Meeting of the Amer- paradigm for educational inquiry. Educational Researcher, 32(1), 5–8.
ican Educational Research Association (AERA) https://cpb-usw2.wpmucdn.com/ Vygotsky, L. S. (1978). Mind in society. Cambridge, MA: Harvard University Press.
sites.bc.edu/dist/c/183/files/2021/04/aera21_proceeding_1687044.pdf (accessed 28 Wang, C., Choi, Y., Benson, K., Eggleston, C., & Weber, D. (2021). Teacher’s role in foster-
June 2021). ing preschoolers’ computational thinking: An exploratory case study. Early Education
Resnick, M., Maloney, J., Monroy-Hernandez, A., Rusk, N., Eastmond, E., Brennan, K., and Development, 32(1), 26–48.
Millner, A., Rosenbaum, E., Silver, J., Silverman, B., & Kafai, Y. (2009). Scratch: Pro- Wang, F., & Hannafin, M. J. (2004). Using Design-based Research in Design and Research
gramming for all. Communications of the ACM, 52(11), 60–67. of Technology- Enhanced Learning Environments. Educational Technology Research and
Rich, K. M., Strickland, C., Binkowski, T. A., & Franklin, D. (2019). A K-8 debugging Development, 53(4), 1042–1629.
learning trajectory derived from research literature. In Proceedings of the 50th ACM Weintrop, D., & Wilensky, U. (2018). How block-based, text-based, and hybrid block/text
Technical Symposium on Computer Science Education (pp. 745–775). New York, NY, modalities shape novice programming practices. International Journal of Child-
USA. : Association for Computing Machinery. 10.1145/3287324.328739. Computer Interaction, 17, 83–92. 10.1016/j.ijcci.2018.04.00.
Shumway, J. F., Welch, L. E., Kozlowski, J. S., Clarke-Midura, J., & Lee, Whalley, J., Settle, A., & Luxton-Reilly, A. (2021). Analysis of a Process for Introductory
V. R. (2021). Kindergarten students’ mathematics knowledge at work: The Debugging. In Australasian Computing Education Conference (ACE ’21) (pp. 11–20).
mathematics for programming robot toys. Mathematical Thinking and Learning, New York, NY: Association for Computing Machinery. 10.1145/3441636.3442300.
10.1080/10986065.2021.1982666. Wilson, A., & Moffat, D.C. (2010). Evaluating scratch to introduce younger schoolchildren
Silvis, D., Clarke-Midura, J., Shumway, J., & Lee, V. R. (2021). Objects to debug with: to programming. PPIG.
How young children resolve errors with tangible coding toys. In E. de Vries, Y. D. Ho, Wing, J. (2008). Computational thinking and thinking about computing. Philosophical
& J. Ahn (Eds.), Proceedings of the 15th International Conference of the Learning Sciences Transactions of The Royal Society, 366(1881), 3717–3725. 10.1098/rsta.2008.0118.
- ICLS 2021 (pp. 147–154). Bochum, Germany: International Society of the Learning Wing, J. (2010). Computational Thinking: What and Why?. https://www.cs.cmu.
Sciences. edu/∼CompThink/resources/TheLinkWing.pdf
Sipitakiat, A., & Nusen, N. (2012). Robo-Blocks: Designing Debugging Abilities in a Tangi- Wing, J. M. (2006). Computational thinking. Communications of the ACM, 49(3), 33–35.
ble Programming System for Early Primary School Children. In Proceedings of the 11th 10.1145/1118178.1118215.
International Conference on Interaction, Design and Children. Xu, S., & Rajlich, V. (2004). Cognitive process during program debugging. In Proceedings
Solomon, C., & ). Computer environments for children: A reflection on theories of learning and of the 3rd IEEE International Conference on Cognitive Informatics (pp. 176–182).
education. Cambridge, M.A. (1986): MIT Press. Yadav, A., Hong, H., & Stephenson, C. (2016). Computational Thinking for All: Peda-
Solomon, C., Harvey, B., Kahn, K., Lieberman, H., Miller, M., Minksy, M., Papert, A., & Sil- gogical Approaches to Embedding 21st Century Problem Solving in K-12 Classrooms.
verman, B. (2020). History of Logo. Proceedings of the ACM on Programming Languages, TechTrends, 60, 565–568.
4(HOPL). Yin, K. R. (2009). Case study research. Design and methods.: SAGE.
Spinellis, D. (2017). Effective debugging: 66 specific ways to debug software and systems.
Addison Wesley-Pearson Education.

