Recent years have seen increasing awareness of the need to engage young learners in computational thinking (CT). Integrating CT with digital storytelling, in which students create short narratives, offers significant potential for promoting interdisciplinary learning; however, realizing this potential requires providing both teachers and students with automated support. A promising approach for enabling such support is to leverage advances in Large Language Models (LLMs), which have demonstrated considerable potential for assessing both programming and natural language artifacts. In this work, we investigate the capabilities of LLMs to automatically assess student-created block-based programs developed in a narrative-centered learning environment that engages upper elementary students (ages 9 to 11) in learning CT and physical science through the creation of interactive science narratives. Using narrative programs created by 28 students, we examine the efficacy of LLMs in assessing the programs along two dimensions.
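To make the assessment setup concrete, the sketch below illustrates one way an LLM could be prompted to score a block-based program: the program is serialized to text and paired with a rubric prompt. This is a minimal illustration, not the study's actual pipeline; the model name, rubric wording, and program serialization are all hypothetical assumptions, and only an OpenAI-style chat API is assumed.

```python
# Minimal sketch (not the paper's pipeline): score a student's block-based
# program, flattened to text, on one hypothetical rubric dimension.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A student's block-based narrative program, serialized to a textual form
# (illustrative example, not taken from the study data).
student_program = """
when green flag clicked
  say "The water heats up" for 2 seconds
  broadcast "boil"
when I receive "boil"
  switch costume to "steam"
"""

# One hypothetical rubric dimension; the study assesses two dimensions,
# which are not specified in this sketch.
rubric = (
    "Rate the program's use of event-based sequencing on a 1-4 scale, "
    "where 1 = no events used and 4 = events correctly coordinate the "
    "narrative. Reply with the score followed by a one-sentence rationale."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": "You assess elementary students' block-based "
                       "programs against a rubric.",
        },
        {
            "role": "user",
            "content": f"{rubric}\n\nProgram:\n{student_program}",
        },
    ],
)
print(response.choices[0].message.content)
```

In practice, such a prompt would typically be run per rubric dimension and the model's scores compared against human raters, which is the kind of efficacy question this work examines.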