According to Stack Overflow’s 2025 Developer Survey, nearly 72% of developers said that “vibe coding” – defined as generating entire applications from prompts – is not part of their workflow, and an additional 5% emphatically rejected the idea that it ever would be.
Empirical research and position papers published this year provide some more context.
Sarkar, A. (University of Cambridge and University College London) and Drosos, I. (Microsoft) conducted an observational study (June 2025) with 12 professional developers from Microsoft, all experienced in programming and familiar with tools like GitHub Copilot. Participants used a conversational LLM-based coding interface to complete programming tasks, and the researchers analyzed session transcripts, interaction logs, and follow-up interviews to identify usage patterns and cognitive strategies. They found that while participants reported efficiency gains for familiar or boilerplate tasks, particularly when generating or modifying standard patterns, these benefits diminished on more complex assignments.
Debugging AI-generated code remained a major friction point, often requiring developers to mentally reverse-engineer the logic or manually rewrite portions of the output. Importantly, users expressed consistent uncertainty about the correctness and reliability of generated code, underscoring that trust in the AI remained limited.
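To make that interaction pattern concrete, the sketch below shows the kind of conversational loop the study participants describe: a natural-language request, generated code, and iterative refinement until the developer either accepts the output or gives up and edits it by hand. The `call_llm` helper is a hypothetical stand-in for whatever LLM backend is in use, and its canned response exists only so the example runs; this illustrates the workflow, not any particular tool.

```python
# A minimal sketch of the conversational "vibe coding" loop described above.
# `call_llm` is a hypothetical stand-in for an LLM backend; here it returns a
# canned snippet so the example runs without any external service.

def call_llm(prompt: str) -> str:
    """Pretend to ask an LLM for code; a real version would call a chat API."""
    return "def add(a: int, b: int) -> int:\n    return a + b\n"

def vibe_code(task: str, max_rounds: int = 3) -> str:
    """Prompt, inspect, and iteratively refine until the developer accepts."""
    prompt = f"Write a Python function that {task}. Return only code."
    code = ""
    for _ in range(max_rounds):
        code = call_llm(prompt)
        print(code)                                   # developer reads the output
        feedback = input("Press Enter to accept, or describe a change: ")
        if not feedback.strip():
            return code                               # accepted as-is
        # otherwise fold the feedback into the next prompt and try again
        prompt += f"\nPrevious attempt:\n{code}\nRevise it as follows: {feedback}"
    return code   # out of patience: hand-edit the last attempt (the friction the study reports)

if __name__ == "__main__":
    print(vibe_code("adds two integers"))
```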
Gadde, A. (May 2025), in a literature-review-based paper, positions vibe coding as the next evolution in AI-assisted software development, arguing that it significantly lowers barriers to entry by enabling users to generate working software from natural language prompts. Gadde characterizes vibe coding as a practical middle ground between low-code platforms and agentic AI systems, combining human intent expression with generative code synthesis. Unlike traditional development workflows, Gadde claims, vibe coding empowers users, even those without formal programming experience, to act as high-level specifiers, while generative models handle much of the underlying implementation.
Sapkota, R., et al. (2025) conducted a structured literature review and conceptual comparison of two emerging AI-assisted programming paradigms: vibe coding and agentic coding. The paper defines vibe coding as an intent-driven, prompt-based programming style in which humans interact with an LLM through conversational instructions, iteratively refining output. By contrast, agentic coding involves AI agents that autonomously plan, code, execute, and adapt with minimal human input. The authors argue that these paradigms represent distinct axes in AI-assisted development—one human-guided and interactive, the other goal-oriented and autonomous.
They propose a comparative taxonomy based on ten dimensions, including autonomy, interactivity, task granularity, execution environment, and user expertise required. They claim that vibe coding excels in creative, exploratory, and early-stage prototyping contexts, while agentic coding shows promise in automating repetitive, well-scoped engineering tasks. However, both approaches face common challenges, including error handling, debugging, quality assurance, and system integration. The authors conclude that hybrid systems combining the strengths of vibe coding and agentic coding—conversational guidance with agentic automation—may be the most practical path forward.
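As a rough illustration of what such a hybrid might look like, the sketch below lets the model iterate autonomously against an executable acceptance check (the agentic half) and falls back to the developer for conversational guidance only when it cannot converge. Everything here is assumed for illustration: `call_llm` again stands in for an LLM backend and returns a canned answer, and a single assertion stands in for a real test suite.

```python
# A sketch of the hybrid pattern: autonomous generate-and-test iterations, with
# the human pulled in only when the agentic loop stalls. `call_llm` is a
# hypothetical stand-in that returns a canned, working answer here.

def call_llm(prompt: str) -> str:
    return (
        "import re\n"
        "def slugify(text):\n"
        "    return re.sub(r'[^a-z0-9]+', '-', text.lower()).strip('-')\n"
    )

def passes(code: str, test: str) -> bool:
    """Run the generated code plus an acceptance assertion in one namespace."""
    namespace: dict = {}
    try:
        exec(code, namespace)   # load the generated definitions
        exec(test, namespace)   # run the acceptance check against them
        return True
    except Exception:
        return False

def hybrid_session(task: str, test: str, attempts: int = 3) -> str:
    prompt = f"Implement in Python: {task}. It must satisfy: {test}"
    code = ""
    for _ in range(attempts):
        code = call_llm(prompt)
        if passes(code, test):                      # agentic: iterate without the human
            return code
        prompt += "\nThe previous attempt failed the acceptance test; fix it."
    # conversational fallback: escalate to the developer once autonomy stalls
    print("Could not satisfy the test automatically. Last attempt:\n" + code)
    return input("Describe how to proceed, or paste a manual fix:\n")

if __name__ == "__main__":
    result = hybrid_session(
        "a slugify(text) function",
        "assert slugify('Hello, World!') == 'hello-world'",
    )
    print(result)
```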
Stephane H. Maes, CTO and CPO at IFS & ESSEM Research, in a literature review and enterprise-experience-based position paper (April 2025), states that code written through vibe coding often lacks documentation, architectural coherence, and design rationale. Without rigorous standards and tooling for verification, maintainability, and lifecycle control, adopting AI-generated code introduces operational risk. Maes proposes that successful adoption of vibe coding in production environments requires not just technical integration but structured governance: workflows, tooling, and cultural norms that enforce accountability, traceability, and testability. The core thesis is that “real coding is support and maintenance,” and vibe coding, in its current form, largely sidesteps these responsibilities.
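What such governance might look like in tooling terms is easy to sketch, even if the specifics are entirely assumed here: the check below fails a CI run when a module carrying an AI-provenance marker has no corresponding test file. The `# AI-Generated:` header convention and the `tests/test_<name>.py` layout are inventions for this illustration, not an established standard.

```python
# A sketch of one governance guardrail in the spirit Maes describes: refuse to
# merge AI-generated modules that lack a matching test. The provenance marker
# and directory layout below are assumptions made for this illustration only.

import pathlib
import sys

MARKER = "# AI-Generated:"   # assumed header, e.g. "# AI-Generated: model-x, 2025-06-01"

def untested_ai_modules(root: str = ".") -> list[str]:
    """List AI-marked source files under src/ that have no tests/test_<name>.py."""
    problems = []
    for src in pathlib.Path(root, "src").rglob("*.py"):
        if MARKER not in src.read_text(encoding="utf-8"):
            continue                              # hand-written code: out of scope here
        expected_test = pathlib.Path(root, "tests", f"test_{src.name}")
        if not expected_test.exists():
            problems.append(f"{src}: AI-generated module has no {expected_test}")
    return problems

if __name__ == "__main__":
    issues = untested_ai_modules()
    print("\n".join(issues) or "ok")
    sys.exit(1 if issues else 0)                  # non-zero exit fails the CI job
```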
And yet, despite these limitations and widespread developer skepticism, vibe coding remains very much a part of the conversation. Why? Not because it works at scale today, but because it gestures toward a future where programming feels more like intent-driven design than manual construction. It flatters a seductive idea: that software can be summoned by describing it, rather than engineered line by line.
Engineers don’t just build systems for today; they chart trajectories. And so, with today’s special feature, we aim to: