Large Language Models (LLMs) have had considerable difficulty when prompted with mathematical questions, especially those within theory of computing (ToC) courses. In this paper, we detail two experiments regarding our own ToC course and the ChatGPT LLM. In the first, we evaluated ChatGPT's ability to pass our own ToC course's exams. In the second, we created a database of sample ToC questions and responses designed to accommodate the topic and structure choices of other ToC offerings, and we scored each of ChatGPT's outputs on these questions. Overall, we determined that ChatGPT can pass our ToC course and is adequate at understanding common formal definitions and answering "simple"-style questions, e.g., true/false and multiple choice. However, ChatGPT often makes nonsensical claims in open-ended responses, such as proofs.
Sat 7 Dec (all times UTC)

Session: 22:00 - 23:00

22:00 (30m, Paper): Can ChatGPT pass a Theory of Computing Course?
Matei Golesteanu (United States Military Academy), Garrett Vowinkel (United States Military Academy), Ryan Dougherty (United States Military Academy)

22:30 (30m, Paper): Hash Table Notional Machines: A Comparison of 2D and 3D Representations
Colleen M. Lewis (University of Illinois Urbana-Champaign), Craig S. Miller (DePaul University), Johan Jeuring (Utrecht University), Janice Pearce (Berea College), Andrew Petersen (University of Toronto)