Over the last two decades, Computer Science (CS) has emerged as a field of study at almost all levels of education. Computer science has gone beyond being an important skill required for a wide range of modern professions to become an essential competence for everyday life. CS is a young domain that is still developing effective teaching traditions. It is also a very dynamic domain, where technologies, skills, and even subfields are constantly emerging and evolving, challenging CS education researchers to find ways to promote effective education even while the field’s core concepts are being defined.

The critical need to increase access to computer science education is highlighted by President Obama’s “CSforAll” initiative to provide CS education for all K-12 children in the United States (Smith 2016). Because of the central importance of Computer Science to innovation, it is increasingly important to expand equitable access to CS education. One promising way to reach more people, more effectively, lies in technology-enhanced learning and teaching approaches, especially those enhanced with Artificial Intelligence (AI) techniques.

The field of AI-supported education (AIED) covers a wide spectrum of technologies and approaches. Since the 1980s, researchers in this community have been interested in the association between education and AI, focusing mainly on knowledge representation, reasoning, and learning (Self 2015, p. 5). According to Russell and Norvig (2016), AI includes problem solving, representation of and reasoning with certain and uncertain knowledge, machine learning, and techniques for communicating, perceiving, and acting, all in the service of designing and developing intelligent agents. More recently, Pinkwart (2015) pointed out that various technological developments (e.g., sensor technologies, graphics technologies, or cognitive architectures) strongly influence the ongoing development of AIED. In this special issue, we broadly define “AI-supported education for Computer Science” as any CS educational system or technique supported by AI or advanced technology.

The papers published in this special issue address two challenging and complementary areas of AI-supported education in Computer Science: the implementation of innovative learning tools and approaches for applying these tools in real educational practice. First, the field needs innovative and highly effective tools that support Computer Science competencies. Such tools should employ advanced AI techniques developed from a deep understanding of CS pedagogy, students, and related domains. Second, teachers looking to employ such tools need support in integrating them into their educational practices. This includes both learning how to use a new tool and customizing existing learning activities to leverage it. In the remainder of this preface, we briefly outline the papers selected for this special issue.

In “Evolution of an Intelligent Deductive Logic Tutor using Data-Driven Elements,” Mostafavi and Barnes describe their fully data-driven approach that leverages past student work to model learning and problem-solving. The authors augmented their logic tutor, Deep Thought, by breaking its original problem sets into levels and modeling students’ logic rule application strategies to assess student proficiency. This paper reports a series of user studies with multiple instructional interventions, including data-driven worked examples and hints, to examine their impacts on learning. The results show that data-driven methods expose students to more logic concepts and reduce time to tutor completion. Impressively, the system reduces dropout from over 50% with the original problem-solving environment to less than 10% with the data-driven intelligent tutor.

In “Data-Driven Hint Generation in Vast Solution Spaces: A Self-Improving Python Programming Tutor,” Rivers and Koedinger apply a similar data-driven approach that incrementally generates a solution space for programming problems. The authors define a solution space as a graph of intermediate states that students pass through as they work on a given problem, similar to the basis for hint generation in the Deep Thought tutor. Each state is represented by the student’s current code. The system starts the process of path construction by making use of different exemplar solutions and inserting new “correct” states into the solution space. Rivers and Koedinger have tested this approach in the domain of Python, a programming language that supports multiple programming paradigms, including object-oriented, aspect-oriented, and functional programming; the approach thus seems promising across a range of programming styles.
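The solution-space idea can be pictured as a graph of program states, with a hint pointing one step along a shortest path from the student’s current state toward a known-correct state. The sketch below is purely illustrative; the state encoding, edges, and hint policy here are our own assumptions, not the authors’ implementation:

```python
# Illustrative sketch (not the authors' system): a solution space as a graph
# of program states, where a hint is the first move on a shortest path from
# the student's current state to any known-correct state.
from collections import deque

def next_step_hint(edges, start, correct_states):
    """Breadth-first search from the student's state; return the first state
    on a shortest path to a correct state, or None if none is reachable."""
    parent = {start: None}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        if state in correct_states:
            # Walk the parent chain back to the first move out of `start`.
            while parent[state] is not None and parent[parent[state]] is not None:
                state = parent[state]
            return None if state == start else state
        for nxt in edges.get(state, []):
            if nxt not in parent:
                parent[nxt] = state
                queue.append(nxt)
    return None

# Tiny example: states stand in for code snippets observed in past student work.
edges = {
    "x=0": ["x=0;loop", "x=1"],
    "x=0;loop": ["x=0;loop;return"],
}
correct = {"x=0;loop;return"}
print(next_step_hint(edges, "x=0", correct))  # → x=0;loop
```

In the actual tutor, states are derived from real student code and the graph grows as more solutions are observed; the sketch only conveys the graph-search intuition behind next-step hints.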

Gerdes, Heeren, Jeuring, and van Binsbergen propose a strategy-based model tracing and property-based testing approach to modeling the domain of Haskell, a functional programming language, in their paper “Ask-Elle: a teacher-adaptable programming tutor for Haskell giving automated feedback.” Ask-Elle derives the intention of the student using a set of pre-specified model solutions. Then, using the programming strategy in each model solution, many variants are generated. All generated solutions are normalized before they are compared with the student program. Through this comparison, the system is able to verify the correctness of Haskell programs and to provide hints. In contrast to classical approaches to building tutoring systems for programming, which are usually based on one or a few ideal solutions for checking the correctness of student solutions, this approach incorporates normalization, transformation, and testing methods that enlarge the space of recognized correct solutions. These strategies form a common thread with the paper by Rivers and Koedinger, who used similar strategies to model correct student programs.
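To give a flavor of what normalization before comparison means, the sketch below canonicalizes variable names so that alpha-equivalent programs compare equal. It is a minimal, assumed example in Python (Ask-Elle itself works on Haskell and uses far richer program transformations):

```python
# Illustrative sketch (assumed, not Ask-Elle's machinery): compare a student
# program against a model solution after renaming variables canonically, so
# that programs differing only in identifier names are treated as equal.
import ast

class Canonicalize(ast.NodeTransformer):
    """Rename every variable to v0, v1, ... in order of first appearance."""
    def __init__(self):
        self.names = {}

    def visit_Name(self, node):
        if node.id not in self.names:
            self.names[node.id] = f"v{len(self.names)}"
        node.id = self.names[node.id]
        return node

def normalize(src):
    # Parse, canonicalize identifiers, and serialize the AST for comparison.
    return ast.dump(Canonicalize().visit(ast.parse(src)))

model   = "total = 0\nfor n in items:\n    total = total + n"
student = "s = 0\nfor x in items:\n    s = s + x"
print(normalize(model) == normalize(student))  # → True
```

Real normalization in tutors of this kind goes well beyond renaming (e.g., desugaring, reordering independent statements, inlining), which is what allows a small set of model solutions to cover a large space of correct student programs.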

Two papers in this issue introduce AI-supported dialogue-based instructional approaches, which adopt the idea of engaging students in the process of learning through natural language dialogues. In “Shifting the Load: A Peer Dialogue Agent that Encourages Its Human Collaborator to Contribute More to Problem Solving,” Howard, Jordan, Di Eugenio, and Katz develop an artificial peer collaborator for aiding Computer Science students. The peer agent monitors the student’s collaborative behavior and provides guidance towards more productive initiative shifts. The study reports that students learn when interacting with the peer agent and that the agent’s tactics for encouraging the student to begin taking the initiative were helpful. In “Do You Think You Can? The Influence of Student Self-Efficacy on the Effectiveness of Tutorial Dialogue for Computer Science,” Wiggins, Grafsgaard, Boyer, Wiehe, and Lester study the influence of student self-efficacy, the extent to which a student believes he or she can achieve learning objectives in Computer Science, in tutorial dialogue. They found that tutorial dialogue can offer highly effective support for introductory Computer Science learning, but that students’ self-efficacy may need to be taken into account. For example, students with low self-efficacy appeared to benefit from a higher level of tutor interaction, achieving increased learning gains and experiencing lower frustration. Their results provide insight for the next generation of tutorial dialogue systems that support Computer Science learning.

In “Lab4CE: a Remote Laboratory for Computer Education,” Broisin, Venant, and Vidal deploy virtualization technologies to develop a virtual laboratory for supporting Computer Science education, and they investigate the research question of how scaffolding around the lab increases students’ engagement in remote practical learning of computer science. The authors conducted an exploratory study with 139 undergraduate students, finding that practice with the remote virtual lab has a positive effect on learners’ engagement. There is also a positive correlation between practice with the system and learning outcomes.

While the previous papers address knowledge-based adaptive support, Bosch and D’Mello explore “The Affective Experience of Novice Computer Programmers” working with a computerized learning environment for Python. In a lab study with 99 students, the most frequent affective states were engagement, confusion, frustration, boredom, and curiosity. In addition, the authors found that confusion and frustration followed errors and preceded hint usage, while curiosity and engagement followed reading or coding. Boredom and frustration were negatively correlated with learning, while the transitions confusion → frustration, frustration → confusion, and boredom → engagement were positively correlated with learning.

Hatzilygeroudis and Grivokostopoulou focus on another important educational activity in CS courses in “An Educational System for Learning Search Algorithms and Automatically Assessing Student Performance.” Their system, AITS, supports two kinds of interactive exercises: practice exercises equipped with instructional feedback to help students acquire new skills, and assessment exercises with automatic grading and error detection. The AITS error detection and grading modules were compared with expert grading, showing that the grades produced by the system are highly correlated with teacher assessments of student answers.

The eight papers in this special issue contribute to both the AIED and Computer Science education communities. We thank the authors for their excellent and novel advances in AI-supported CS education, and we are indebted to the reviewers for their service and invaluable feedback.