Here is the uncomfortable truth about vibe coding: AI is very good at generating code that works and very bad at generating code that is secure. These are different things. A login page can function perfectly — accept credentials, redirect to the dashboard, show the right data — while being trivially exploitable by anyone who opens their browser's developer tools.

This is not a theoretical concern. In May 2025, security researcher Michael Skelton audited 1,645 Lovable-generated applications and found vulnerabilities in 170 of them — roughly 10%. The issues included exposed API keys, missing authentication on API endpoints, and database access without row-level security. These were not sophisticated attacks. They were things any moderately curious person could discover in minutes.

This guide covers the specific security risks in AI-generated code, shows you what to check before launching, and tells you when the right move is to hire a professional security review.

Why AI-Generated Code Has Security Problems

The root cause is straightforward: AI coding tools are trained to generate code that works, not code that is secure. When you prompt an AI to "build a user dashboard that shows their profile and order history," the AI focuses on making that functionality work. It does not, by default, think about whether the API route checks that the request is authenticated, whether one user can read another user's rows, whether input is validated before it reaches the database, or whether secrets are kept out of client-side code.

A professional developer thinks about these things instinctively because they have been trained to. Security is part of their mental checklist for every feature. AI tools do not have this instinct. They generate the happy path — what happens when everything goes right — and leave the security gaps for you to find. Or for someone else to find first.

The Five Most Common Vulnerabilities

These are the security issues we see most frequently in vibe-coded applications, ranked by how common they are and how serious the consequences can be.

1. Missing Row-Level Security (RLS)

This is the single most dangerous vulnerability in vibe-coded apps that use Supabase. Row-level security controls which database rows each user can access. Without it, any authenticated user can read, modify, or delete any other user's data.

Here is how it happens: you ask the AI to build a notes app. The AI creates a notes table and writes queries to insert and retrieve notes. The queries work. Your app functions. But the AI did not add RLS policies to the table, which means the Supabase client — which runs in the user's browser — has unrestricted access to every row in the table. A user can open the browser console, call the Supabase API directly, and read everyone's notes.

The fix is to enable RLS on every table and write policies that restrict access. Our security basics guide walks through this step by step. If you use Supabase, this is the first thing to check.
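For the notes-app example above, the fix looks roughly like the following Supabase SQL migration. This is a minimal sketch assuming the table is named `notes` and has a `user_id` column tied to the authenticated user; adapt the names to your schema.

```sql
-- Turn on row-level security: with no policies, this blocks all access
-- through the client-side Supabase API.
alter table notes enable row level security;

-- Allow each user to read and write only rows they own.
create policy "Users manage only their own notes"
  on notes
  for all
  using (auth.uid() = user_id)
  with check (auth.uid() = user_id);
```

The `using` clause filters which rows a user can see or modify; the `with check` clause prevents inserting or updating a row into someone else's account.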

2. Exposed API Keys and Secrets

AI tools routinely place API keys, database connection strings, and third-party service credentials directly in client-side code. This means anyone viewing your page source or network requests can see and steal your keys.

The consequences depend on the key. An exposed Stripe secret key means someone can issue refunds, create charges, or access your customer data. An exposed Supabase service role key bypasses all RLS policies. An exposed OpenAI key means someone can run up thousands of dollars in API charges on your account.

What to check: Search your entire frontend codebase for strings that start with `sk_` (Stripe), `service_role` (Supabase), or any key that is not explicitly labeled as a public/anon key. Environment variables should never be prefixed with `NEXT_PUBLIC_` or `VITE_` unless their values are genuinely safe to expose to browsers.
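One way to automate that search is a small scanner run over your built frontend bundle. This is an illustrative sketch, not an exhaustive detector — the patterns below cover only the key formats mentioned above, and you would extend the list for the services you actually use.

```typescript
// Patterns that suggest a secret has leaked into client-side code.
// Illustrative only; add patterns for your own providers.
const SECRET_PATTERNS: RegExp[] = [
  /sk_(live|test)_[A-Za-z0-9]+/,          // Stripe secret keys
  /service_role/,                          // Supabase service role references
  /-----BEGIN (RSA )?PRIVATE KEY-----/,    // raw private key material
];

// Returns true if the given source text appears to contain a secret.
function containsSecret(source: string): boolean {
  return SECRET_PATTERNS.some((pattern) => pattern.test(source));
}
```

Run it against every file that ships to the browser; any hit is worth investigating, and any confirmed leak means rotating the key, not just deleting the line.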

3. Missing Authentication on API Routes

Your app has a beautiful login page. Users sign in before seeing the dashboard. But the API endpoints that serve the dashboard data do not check whether the request comes from an authenticated user. Anyone with the endpoint URL can fetch the data directly, bypassing the login entirely.

This happens because the AI treats the frontend and backend as separate problems. It adds authentication to the UI (showing/hiding pages based on login status) but forgets to add it to the API routes. The frontend is a lock on a glass door — it looks secure, but walking around it is trivial.

What to check: Every API route that returns user-specific data or performs user-specific actions must verify the authentication token. Call your API endpoints directly using a tool like curl or Postman, without any authentication headers. If they return data, they are not protected.
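The server-side fix is a guard that rejects any request without a valid token before the handler runs. The sketch below is framework-agnostic and assumes a hypothetical `verifyToken()` function — in a real app you would substitute your auth provider's server-side verification call (for example, Supabase's `auth.getUser()`), and the request/response shapes would come from your framework.

```typescript
// Minimal request/response shapes for illustration.
type Req = { headers: Record<string, string> };
type Res = { status: number; body: string };
type Handler = (req: Req) => Res;

// Placeholder: a real implementation verifies a session token or JWT
// signature server-side via your auth provider.
function verifyToken(token: string): boolean {
  return token === "valid-session-token";
}

// Wraps a handler so requests without a valid bearer token get a 401
// before any data is fetched.
function requireAuth(handler: Handler): Handler {
  return (req) => {
    const header = req.headers["authorization"] ?? "";
    const token = header.startsWith("Bearer ") ? header.slice(7) : "";
    if (!verifyToken(token)) {
      return { status: 401, body: "Unauthorized" };
    }
    return handler(req);
  };
}
```

The key property: authentication is enforced in the route itself, so hiding the page in the UI is no longer your only line of defense.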

4. No Input Validation or Sanitization

AI-generated code typically trusts user input. If a form expects an email address, the AI writes code that accepts whatever the user types and stores it in the database. This opens the door to attacks such as SQL injection, stored cross-site scripting (XSS), and corrupted records from malformed or oversized input.

Modern frameworks like React and Next.js mitigate some of these risks by default (React escapes HTML in JSX, for example), but server-side code generated by AI often lacks parameterized queries, input length limits, and type validation.

What to check: Every form field should validate input type, length, and format on both the client and server side. Database queries should use parameterized statements, never string concatenation. If your AI generated a query that looks like `SELECT * FROM users WHERE id = ${userId}` with template literals, that is a SQL injection vulnerability.
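A server-side validator for the email example might look like the sketch below. The length limit and regex are illustrative assumptions (real apps often reach for a schema library such as zod instead), and the commented-out `db.query` call is a hypothetical driver call shown only to contrast parameterized placeholders with string concatenation.

```typescript
// Validates and normalizes an email before it touches the database.
// Throws on anything that fails type, length, or format checks.
function validateEmail(input: unknown): string {
  if (typeof input !== "string") throw new Error("email must be a string");
  const email = input.trim();
  if (email.length === 0 || email.length > 254) {
    throw new Error("email length out of range");
  }
  // Deliberately simple format check for illustration.
  if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email)) {
    throw new Error("invalid email format");
  }
  return email;
}

// Parameterized query (hypothetical driver; placeholder syntax varies):
// the driver sends userId separately from the SQL text, so user input
// can never be interpreted as SQL.
// await db.query("SELECT * FROM users WHERE id = $1", [userId]);
```

Validate on the client for user experience, but treat only the server-side check as the security boundary — the client-side check can be bypassed with the same developer tools mentioned earlier.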

5. Overly Permissive CORS and Headers

AI tools frequently set CORS (Cross-Origin Resource Sharing) headers to allow requests from any domain — `Access-Control-Allow-Origin: *`. This is fine during development but dangerous in production. It means any website on the internet can call your API and read the responses.

What to check: Your production API should only accept requests from your own domain. Check your CORS configuration and set the allowed origins to your specific domain, not a wildcard.
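Most frameworks accept an allow-list option for this; if you set the header yourself, the logic reduces to something like the sketch below. The domain in `ALLOWED_ORIGINS` is a placeholder for your own.

```typescript
// Assumption: replace with your real production domain(s).
const ALLOWED_ORIGINS = new Set(["https://yourapp.com"]);

// Returns the value to emit for Access-Control-Allow-Origin,
// or null to omit the header entirely.
function corsOriginFor(requestOrigin: string | undefined): string | null {
  if (requestOrigin !== undefined && ALLOWED_ORIGINS.has(requestOrigin)) {
    return requestOrigin; // echo back only origins you explicitly trust
  }
  return null; // never fall back to "*" in production
}
```

Echoing back the matched origin, rather than a wildcard, is what lets browsers enforce the restriction for credentialed requests.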

The Pre-Launch Security Checklist

Before you deploy any vibe-coded application that will handle user data, run through this checklist. It is not exhaustive, but it catches the issues behind the vast majority of security problems in AI-generated code.

| Check | What to Verify | Priority |
| --- | --- | --- |
| RLS enabled | Every Supabase table has RLS on with appropriate policies | Critical |
| No exposed secrets | No API keys, service role keys, or passwords in client code | Critical |
| API auth | Every API route checks authentication before returning data | Critical |
| Input validation | All user input validated on both client and server | High |
| CORS configured | Only your domain is allowed in production CORS settings | High |
| HTTPS enforced | All traffic redirected to HTTPS, no mixed content | High |
| Rate limiting | API endpoints have rate limits to prevent abuse | Medium |
| Error messages | Errors do not expose stack traces, file paths, or internal details | Medium |
| Dependencies audited | Run `npm audit` and address critical vulnerabilities | Medium |
| Admin routes protected | Admin pages and APIs require admin-level authentication | High |

How to Prompt AI for More Secure Code

You cannot fix all security issues through prompting alone, but you can significantly reduce them by being explicit about security requirements. Here are specific prompt strategies that improve the security of AI-generated code.

Be explicit about security requirements. Instead of "build a user profile page," say "build a user profile page with authentication checks on the API route, RLS policies on the profiles table, and input validation on all editable fields." The AI will not add security features you do not ask for.

Ask for security review as a separate step. After generating a feature, prompt: "Review this code for security vulnerabilities. Check for exposed secrets, missing authentication, SQL injection, XSS, and insecure CORS configuration. List every issue you find." This separate review pass catches issues the generation pass missed.

Request RLS policies explicitly. When creating database tables, always include "write RLS policies that ensure users can only access their own data" in your prompt. Better yet, provide the specific policy: "Enable RLS and add a policy: users can only SELECT, INSERT, UPDATE, and DELETE rows where auth.uid() = user_id."

For more prompting strategies, see our guide to writing better prompts for AI coding tools.

When to Hire a Security Review

There is a point where self-review is not enough. Here are the situations where paying for a professional security audit is the right investment.

Before processing payments. If your app handles credit card information or connects to payment processors, a security review is non-negotiable. A breach involving payment data can result in fines, lawsuits, and permanent loss of payment processing ability. A focused security review of your payment integration costs $500-2,000 and is worth every dollar.

Before handling sensitive personal data. Healthcare information (HIPAA), financial records, children's data (COPPA), or European personal data (GDPR) all carry regulatory requirements. Non-compliance has real legal consequences. Get a professional to verify your data handling.

When you have more than 100 active users. At this point, the potential damage from a breach is significant enough to justify a $1,000-3,000 security audit. Think of it as insurance — you are protecting your users and your reputation.

When you are unsure. If you read this article and feel uncertain about whether your app is secure, that uncertainty is itself the answer. A few hours of professional review will give you confidence or catch problems before your users discover them.

Tools That Help

Several tools can automate parts of your security review. None of them replace human judgment, but they catch the low-hanging fruit.

The Bottom Line

Vibe coding's security problem is real but manageable. The vulnerabilities in AI-generated code are not exotic or sophisticated — they are basic issues that have well-known fixes. The problem is that AI tools do not apply those fixes automatically, and non-technical builders do not know to look for them.

If you take one thing from this article: enable RLS, move your secrets to environment variables, and test your API endpoints without authentication. Those three checks catch the majority of security issues in vibe-coded applications. Do them before your first user signs up, not after your first security incident.
