Guided Code Generation with Large Language Models and Static Code Analysis

Research output: Chapter in Book/Report/Conference proceedings › Conference contribution › peer-review

Abstract

We employ an iterative LLM-assisted code-generation workflow coupled with static analysis. For each prompt, the LLM produces an initial code candidate, which is evaluated using SonarQube. Analysis findings (security, quality, maintainability) are fed back to the model to guide regeneration. Iteration continues until issue thresholds are satisfied or a maximum iteration count is reached.

We evaluated our approach on 50 code generation prompts covering security vulnerabilities, code quality, algorithmic tasks, and complexity challenges. Each prompt underwent up to five iterative refinement cycles, with SonarQube feedback incorporated into the LLM's next generation. For each run, we recorded initial and final issue counts, the number of iterations, and issues categorized by type and severity. 94% of runs completed successfully, resolving all detected issues after an average of 1.34 refinement iterations; the majority (74%) required only a single iteration.
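The generate–analyze–regenerate loop described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: `generate_code` and `run_static_analysis` are hypothetical stand-ins for an LLM API call and a SonarQube scan, and the termination condition is simplified to "no remaining issues".

```python
MAX_ITERATIONS = 5  # matches the paper's cap of five refinement cycles


def guided_generation(prompt, generate_code, run_static_analysis,
                      max_iterations=MAX_ITERATIONS):
    """Iteratively regenerate code, feeding static-analysis findings back
    into the model, until no issues remain or the budget is exhausted.

    Returns (code, iterations_used, all_issues_resolved).
    """
    feedback = []  # analysis findings passed into the next generation
    code = None
    for iteration in range(1, max_iterations + 1):
        # The LLM sees the original prompt plus prior analysis findings.
        code = generate_code(prompt, feedback)
        # The analyzer returns a list of issue descriptions (empty = clean).
        feedback = run_static_analysis(code)
        if not feedback:
            return code, iteration, True
    return code, max_iterations, False
```

With stub functions in place of the model and analyzer, a run that is flagged once and then passes terminates after two iterations, mirroring the paper's observation that most prompts need only one or two refinement cycles.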
Original language: English
Title of host publication: EUROCAST 2026 Computer Aided Systems Theory EXTENDED ABSTRACTS
Publication status: Published - Feb 2026
