GenAI Is About to Explode the Number of Apps
Generative AI (GenAI) has already transformed how developers write code. With tools like GitHub Copilot, Cursor, Windsurf, Lovable, and Base44, we're entering a new era of "vibe coding" - where developers describe intent, and code is generated on the fly. These tools are getting so good that what once required a dedicated team can now be accomplished by a single developer with a prompt and a keyboard.
This shift is not only making development faster but fundamentally altering the pace of software creation. Ops engineers, citizen developers, and even product managers are spinning up fully functional applications with little to no traditional coding experience. We’re not just seeing incremental improvement - we're on the brink of a 5x to 10x explosion in the number of applications created, deployed, and forgotten.
GenAI Loves Dependencies - But Hides the Risk
In a viral LinkedIn post, security veteran Caleb Sima compares the effect of GenAI on software to what cloud computing did to infrastructure: it made things fast, disposable, and invisible. Just as cloud created VM sprawl, GenAI is creating app sprawl - each application is unique, but powered by a shared and largely untracked web of dependencies.
Large Language Models (LLMs) have trained on vast swaths of open source code. When they generate applications, they often include open source packages - many of which developers may never have heard of or evaluated for security. These packages are added automatically, often without developer awareness, audit logs, or context. Worse still, the models were trained on historical codebases, which means the packages they surface are often outdated, abandoned, or obscure.
These dependencies become "shadow dependencies": they exist in your environment without a Software Bill of Materials (SBOM), audits, or visibility. You didn’t choose them, you didn’t vet them - but they’re running in production. And new vulnerabilities are discovered in them over time.
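A first step toward visibility is simply inventorying what's declared. The sketch below is a hypothetical helper (standard library only, not any particular SBOM tool) that walks a project tree and collects dependency names from requirements.txt and package.json manifests - raw material for an SBOM:

```python
import json
from pathlib import Path

def inventory_dependencies(root):
    """Collect dependencies declared in requirements.txt and package.json
    files under a project tree - a first step toward an SBOM."""
    deps = set()
    root = Path(root)
    for req in root.rglob("requirements.txt"):
        for line in req.read_text().splitlines():
            line = line.split("#")[0].strip()  # drop comments and blanks
            if line:
                # keep only the package name, not version specifiers or markers
                name = line.split(";")[0]
                for sep in ("==", ">=", "<=", "~=", "!=", ">", "<"):
                    name = name.split(sep)[0]
                deps.add(("PyPI", name.strip()))
    for pkg in root.rglob("package.json"):
        data = json.loads(pkg.read_text())
        for section in ("dependencies", "devDependencies"):
            for name in data.get(section, {}):
                deps.add(("npm", name))
    return sorted(deps)
```

This only covers declared, top-level dependencies in two ecosystems; transitive dependencies pulled in at install time are exactly the ones most likely to stay in the shadows.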
Open Source Vulns Are 1-Day, Not 0-Day
The growing number of apps also means a massively expanded attack surface. And many of these new apps won’t be tracked, maintained, or even remembered.
This isn't theoretical. A recent incident involving an AI-generated SaaS app built using the Cursor coding tool illustrates how quickly things can go sideways. X (formerly Twitter) user @leojr94_ documented how their AI-generated app was quickly discovered and attacked, leading to its shutdown just hours later (shutdown tweet).
The core problem? Most exploited vulnerabilities in the wild aren't novel 0-days - they're known, documented, and publicly listed 1-day vulnerabilities. Attackers don’t need to innovate; they simply search GitHub, exploit databases, and package registries for known CVEs. Exploit code is often freely available on platforms like ExploitDB or integrated into tools like Metasploit.
AI-generated apps are likely to reuse the same vulnerable open source packages over and over, creating uniform, easy-to-target clusters of insecure software.
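Checking a pinned package version against known 1-day vulnerabilities is a solved lookup, not research. As a minimal sketch, the query below follows the documented request format of OSV.dev, Google's public open source vulnerability database (the helper names are my own):

```python
import json
import urllib.request

OSV_URL = "https://api.osv.dev/v1/query"  # public OSV vulnerability API

def build_osv_query(ecosystem, name, version):
    """Build the JSON body OSV expects for a single package/version lookup."""
    return {
        "package": {"ecosystem": ecosystem, "name": name},
        "version": version,
    }

def known_vulns(ecosystem, name, version, timeout=10):
    """Return the list of publicly known vulnerabilities for a pinned version."""
    body = json.dumps(build_osv_query(ecosystem, name, version)).encode()
    req = urllib.request.Request(
        OSV_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.load(resp).get("vulns", [])
```

If a few dozen lines of stdlib Python can enumerate a package's known CVEs, so can any attacker scanning thousands of near-identical AI-generated apps.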
Version and Dependency Chaos Multiplied
In traditional development, tracking dependency versions across even a handful of applications is already a logistical headache. Now imagine doing that across hundreds of GenAI-generated applications.
Each app might use slightly different versions of the same libraries. Some might pin to old versions. Some might use forks. Others might not pin versions at all, leaving the door open to breaking changes or injected vulnerabilities.
There’s no central control, no consolidated dependency management, and no visibility into what’s actually being used - a recipe for chaos.
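Even without central control, the riskiest pattern - unpinned dependencies - is cheap to detect. A minimal sketch (hypothetical helper, treating "pinned" as an exact `==` specifier in a requirements file):

```python
def unpinned_requirements(lines):
    """Flag requirement lines that do not pin an exact version; unpinned
    installs can silently pull in new, possibly vulnerable, releases."""
    flagged = []
    for raw in lines:
        line = raw.split("#")[0].strip()  # drop comments
        if not line or line.startswith("-"):  # skip blanks and pip options
            continue
        if "==" not in line:
            flagged.append(line)
    return flagged
```

Run across hundreds of generated apps, a check like this at least tells you where the door has been left open.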
Nobody Will Maintain These Apps
One of the biggest issues with AI-generated code is the question of ownership. When the code wasn’t fully written or understood by a human, who is responsible for maintaining it?
Developers often treat GenAI-generated apps as "fire-and-forget" solutions. They don't review the logic in depth or think about long-term maintenance. So when a vulnerability is found in a transitive dependency, who will upgrade it?
To make matters worse, AI tools often struggle to fix complex bugs or security issues in their own output. Developers then find themselves in a circular rabbit hole, trying to debug or patch code they didn’t really write and don’t fully understand.
Forgotten Apps, Forgotten Patches
Apps created in a day are also forgotten in a day. When the initial buzz fades, many AI-generated apps will simply be abandoned.
Unfortunately, attackers don’t forget. Open source vulnerabilities already go unpatched for over 100 days on average. With GenAI, that number is likely to grow. More apps mean fewer eyes on each one, and less urgency to maintain them once they’ve served their short-term purpose.
These forgotten apps will become low-hanging fruit for attackers.
We're Entering the Unmanaged Code Era
AI has turned code generation into an uncontrolled, high-speed pipeline. The barrier to entry is gone, and the volume of code in production is exploding. But while GenAI accelerates creation, our security tooling hasn’t kept pace.
We're no longer just managing code created by humans. We’re managing code generated by tools, at a velocity that defies traditional processes. We’re entering an unmanaged code era where most of the code in production won’t be fully understood or manually audited.
The Bottom Line: You Need Automation Now
Manual security processes won’t survive a 10x increase in applications. Certainly not when human developers don’t own the code.
If your organization is embracing GenAI, you need to:
- Automate open source vulnerability scanning for every app, every package, every version.
- Continuously check for license compliance before auditors do.
- Implement smart, centralized patch management that can handle change at AI-speed.
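As one concrete illustration of the license-compliance point, here is a sketch that groups every installed Python distribution by whether its declared license is on an allow-list. The `ALLOWED` set is an example policy, not a recommendation, and metadata-based checks need manual review as a backstop:

```python
from importlib import metadata

# Example allow-list only - real policy belongs with your legal team.
ALLOWED = {"MIT", "BSD", "Apache-2.0", "Apache Software License"}

def license_report(allowed=ALLOWED):
    """Group installed distributions by whether their declared license is
    on the allow-list; missing or unknown licenses go to manual review."""
    report = {"ok": [], "review": []}
    for dist in metadata.distributions():
        lic = (dist.metadata.get("License") or "").strip()
        bucket = "ok" if lic in allowed else "review"
        report[bucket].append((dist.metadata.get("Name"), lic or "UNKNOWN"))
    return report
```

The same pattern - enumerate everything, compare against policy, surface exceptions - is what vulnerability scanning and patch management need to do continuously, per app, at AI-speed.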
AI-generated code demands AI-grade security automation. The unmanaged code era is already here. The question is whether your security strategy is ready for it.