CS1 with a Side of AI: Teaching Software Verification for Secure Code in the Era of Generative AI
Abstract
As AI-generated code promises to become an increasingly relied-upon tool for software developers, there is a temptation to call for significant changes to early computer science curricula. A move away from syntax-focused topics in CS1 toward abstraction and high-level application design seems motivated by the large language models (LLMs) recently made available. In this position paper, however, we advocate for an approach more informed by the AI itself: teaching early CS learners not only how to use these tools but also how to better understand them. Novice programmers who leverage AI code generation without a proper understanding of syntax or logic can produce "black box" code with significant security vulnerabilities. We outline methods for integrating basic AI knowledge and traditional software verification steps into CS1 alongside LLMs, which will better prepare students for software development in professional settings.