With the rise of AI tools and coding assistants like GitHub Copilot and ChatGPT, a new phenomenon has entered the web development landscape: ‘vibe coding’.
Vibe coding can be seen as writing code based on intuition, trends, or aesthetics rather than established standards and best practices. Combined with AI, it allows anyone to quickly build an app or website without needing knowledge of software development.
This essay explores how vibe coding, when powered by AI, undermines web security by introducing unpredictable vulnerabilities, eroding code quality, and making software harder to audit and maintain.
To understand the risks, it is important to examine how AI-driven vibe coding manifests in real-world projects.
What Is AI-Driven Vibe Coding?
Vibe coding is a style of software development where decisions are made based on what “feels right” in the moment. It often relies heavily on copy-pasting code from forums or AI tools without verifying its safety or suitability.
Combined with AI tools such as ChatGPT or GitHub Copilot, vibe coding can be practiced by virtually anyone, with no background in software engineering required. Though this can significantly accelerate development, when the coder cannot interpret the generated code, the risk of introducing vulnerabilities is amplified.
Security Risks Introduced by AI-Driven Vibe Coding
AI-generated code can reproduce insecure coding patterns, as its training data is largely drawn from publicly available examples. This can lead to vulnerabilities such as improper authentication flows or exposure of sensitive information.
Because AI models prioritize speed and functionality, they may omit critical security measures like encryption or input validation unless explicitly instructed by the prompter. This means that developers who rely on AI suggestions without review and understanding may introduce subtle but critical weaknesses into their applications.
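To make this concrete, consider a hypothetical login lookup. The first version mirrors the pattern an assistant might produce when prompted only to “make it work”; the second adds the parameterized query and input validation that a prompt rarely asks for. The `Db` interface, table name, and e-mail check are assumptions made for this sketch, not any particular library’s API.

```typescript
// Minimal stand-in for a SQL client such as node-postgres.
interface Db {
  query(sql: string, params?: unknown[]): Promise<unknown>;
}

// Insecure: string concatenation invites SQL injection, and the raw
// e-mail value is never validated before it reaches the database.
async function findUserUnsafe(db: Db, email: string) {
  return db.query(`SELECT * FROM users WHERE email = '${email}'`);
}

// Hardened: a parameterized query plus basic input validation,
// the two measures most easily omitted unless explicitly prompted.
async function findUserSafe(db: Db, email: string) {
  if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email)) {
    throw new Error("invalid e-mail address");
  }
  return db.query("SELECT * FROM users WHERE email = $1", [email]);
}
```

Both versions “work” on well-formed input, which is exactly why the insecure one survives a vibe-coding workflow: the difference only shows up under attack.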
The recent ‘Tea App’ incident illustrates this. Although there is no direct evidence that vibe coding was the root cause, the incident shows the implications of rapid, AI-assisted development.
The breach was traced to basic security failures: an open database and broken access controls. The absence of review and of security best practices shows how AI-powered vibe coding can reintroduce traditional risks that eventually lead to catastrophic outcomes. 1
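Broken access control of this kind usually comes down to a missing ownership check. The Express-style sketch below is purely illustrative and is not the Tea app’s actual code; the `/messages` route, the in-memory store, and the `x-user-id` header are all assumptions.

```typescript
import express from "express";

// Minimal stand-ins so the sketch is self-contained.
type Message = { id: string; ownerId: string; body: string };
const messages = new Map<string, Message>();

const app = express();

// Broken access control: any caller who knows (or guesses) an id
// can read any user's message, which is how "open" data gets exposed.
app.get("/messages/:id", (req, res) => {
  res.json(messages.get(req.params.id) ?? null);
});

// Fixed: verify that the requester owns the record before returning
// it. The x-user-id header stands in for a real authenticated session.
app.get("/v2/messages/:id", (req, res) => {
  const msg = messages.get(req.params.id);
  if (!msg || msg.ownerId !== req.header("x-user-id")) {
    res.sendStatus(404); // do not reveal whether the id exists
    return;
  }
  res.json(msg);
});

app.listen(3000);
```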
Impact on Maintainability and Auditing
Beyond the immediate security issues, AI-driven vibe coding also impacts the maintainability and auditability of a codebase. The generated code can be inconsistent, poorly documented, and difficult to read, especially when developers rely on it without fully understanding what has been generated.
A large-scale user study 2 conducted by researchers at Stanford found that, overall, participants with access to an AI assistant based on OpenAI’s codex-davinci-002 model wrote significantly less secure code than those without access. Participants with access to the AI assistant were also more likely to believe that they had written secure code.
Another study 3 showed that AI-generated code initially introduces fewer bugs and requires less effort to fix, but that in more complex scenarios it can introduce critical structural issues. The study also emphasizes the need for systematic evaluation and validation before such code is used in production environments.
Addressing these issues of security, maintainability, and auditing requires a thoughtful approach to integrating AI into development workflows.
Solutions and Best Practices
Though AI is rapidly improving, it still requires human oversight. This means that, during development and before accepting code into the codebase, developers should perform proper code reviews: they must critically evaluate the code and make sure that proper testing, whether manual or automated, is in place.
Now more than ever, a codebase should have unit tests in place for important parts like authentication and critical business logic, and end-to-end tests to validate that all flows run correctly. Developers should pay particular attention to automated testing, as the risk of vulnerabilities is increased by AI-generated code.
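As a minimal sketch of what such a unit test might look like, the example below uses Node’s built-in test runner to pin down a session-validation helper; the `isAuthenticated` function and `Session` shape are assumptions made for illustration.

```typescript
import { test } from "node:test";
import assert from "node:assert/strict";

// Hypothetical session shape and auth helper under test.
type Session = { userId: string; expiresAt: number } | null;

function isAuthenticated(session: Session, now = Date.now()): boolean {
  return session !== null && session.expiresAt > now;
}

test("rejects a missing session", () => {
  assert.equal(isAuthenticated(null), false);
});

test("rejects an expired session", () => {
  const stale = { userId: "u1", expiresAt: Date.now() - 1_000 };
  assert.equal(isAuthenticated(stale), false);
});

test("accepts a live session", () => {
  const live = { userId: "u1", expiresAt: Date.now() + 60_000 };
  assert.equal(isAuthenticated(live), true);
});
```

Run with `node --test`. The point is not these specific assertions but that security-critical paths like session expiry are locked down before AI-generated changes can silently regress them.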
Not only developers, but also organizations and companies should have clearly defined protocols in place. Basic policies for writing and testing code should be documented, as should policies for the use of AI tooling and AI-generated code.
Ultimately, to safeguard web security, all those involved should make responsible use of AI tooling in their codebases.
Conclusion
In conclusion, while AI-driven vibe coding offers great advantages for software development, it also introduces significant risks to web security and maintainability when not used carefully. To reap the benefits of AI tools without compromising security and maintainability, developers and organizations must review and adapt their workflows, enforce secure coding standards, and maintain oversight.
By combining the efficiency of AI with responsible practices, we can drive innovation that does not come at the cost of security.
Citations
- “Tea’s Data Breach Shows Why You Should Be Wary of New Apps — Especially in the AI Era.” Business Insider, August 2025. https://www.businessinsider.com/tea-app-data-breach-cybersecurity-ai-vibe-coding-safety-experts-2025-8
- Perry, N., Srivastava, M., Kumar, D., & Boneh, D. (2023). “Do Users Write More Insecure Code with AI Assistants?” Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security, 2785–2799. https://arxiv.org/abs/2211.03622
- Santa Molison, A., Moraes, M., Melo, G., Santos, F., & Assunção, W. K. G. (2025). “Is LLM-Generated Code More Maintainable & Reliable than Human-Written Code?” arXiv preprint arXiv:2508.00700. https://arxiv.org/html/2508.00700v1