Vibe Coding Security: Why "If You Can't Run a Command Line, This Is Dangerous"

Pieter Levels (the indie hacker behind Nomad List, Remote OK, and dozens of profitable bootstrapped products) is one of the most credible voices in the solo-founder space. When he posted a warning about vibe coding in early 2025, it wasn't the kind of cautionary note that gets dismissed as gatekeeping. It was specific, grounded, and sobering:

"If you can't run a command line, vibe coding is genuinely dangerous."

The context: Levels had been experimenting with AI coding tools and watching others use them to ship production software without understanding the code they were deploying. His concern wasn't that people were building; it was that they were building systems that handled real user data, real payments, and real authentication, without understanding the security implications of what the AI had generated.

This post is about taking that warning seriously. Not to discourage non-technical founders from building, but to give you a clear-eyed picture of where the risks are and what minimum viable security competence looks like.

Why AI-Generated Code Has Systematic Security Gaps

AI code generation tools are trained on the totality of publicly available code, which includes an enormous amount of code written with poor security practices, outdated patterns, and outright vulnerabilities. The models generate plausible, working code. They are not optimized to generate secure code.

Common security failures in AI-generated code:

SQL Injection Vulnerabilities

AI tools regularly generate database query code that concatenates user input directly into SQL strings rather than using parameterized queries. This is one of the most well-documented and dangerous vulnerability classes: it allows an attacker to manipulate your database with crafted input.

A seemingly innocent search feature built with a quick AI prompt can become a vector for extracting your entire user database, including emails and hashed passwords.
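To make the contrast concrete, here's a sketch in JavaScript. The function names are illustrative, and the `$1` placeholder follows node-postgres's convention (mysql2 uses `?` instead), but the shape is the same across drivers: the query text and the user-supplied values travel separately, so the values can never be interpreted as SQL.

```javascript
// Vulnerable pattern: user input concatenated straight into the SQL string.
function unsafeSearch(term) {
  return `SELECT id, name FROM users WHERE name LIKE '%${term}%'`;
}

// A crafted "search term" escapes the string literal and injects its own SQL.
const malicious = "%'; DROP TABLE users; --";
console.log(unsafeSearch(malicious));
// The resulting string now contains a DROP TABLE statement.

// Safe pattern: keep the SQL text and the values separate. Node database
// drivers accept a parameter list like this and never splice the values
// into the query text themselves.
function safeSearch(term) {
  return {
    text: "SELECT id, name FROM users WHERE name LIKE $1",
    values: [`%${term}%`],
  };
}
```

This is also a concrete question you can put to your AI tool: ask whether the values are ever interpolated into the query string, or passed as a separate parameter list.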

Exposed API Keys and Secrets

AI tools frequently generate example code with hardcoded credentials, placeholder values, and environment variable references that beginners then replace with real values, then accidentally commit to public GitHub repositories.

GitHub's secret scanning detected over 1 million exposed secrets in 2024. A significant and growing share of these exposures comes from AI-generated code samples where the developer didn't understand that credentials belong in environment variables, not in source code.

Insecure Authentication Flows

Authentication is genuinely hard to get right. AI-generated authentication code often gets critical details wrong: no rate limiting on login endpoints (which allows brute force attacks), weak session token generation, no HTTPS enforcement, or passwords stored in reversible form rather than as one-way salted hashes.

Hallucinated Vulnerable Dependencies

AI models sometimes reference libraries that don't exist, or suggest outdated library versions with known vulnerabilities. When a non-technical founder runs npm install on whatever the AI suggested, they may be installing dependencies with published CVEs (Common Vulnerabilities and Exposures).

What Non-Developers Must Understand Before Shipping

You don't need to be a security engineer. You need to understand enough to avoid the most common and damaging mistakes.

Environment Variables Are Not Optional

Never put API keys, database passwords, or any credentials in your source code. Full stop. Use environment variables: .env files for local development, your platform's secret management system for production. Never commit a .env file to a public repository.

This is the single most important thing to understand. A leaked Stripe API key or database password can result in thousands of dollars in fraudulent charges or complete data loss.

HTTPS Is Table Stakes

If your application handles user accounts, payments, or any personal data, it must run exclusively over HTTPS. Your hosting platform (Vercel, Render, Railway, Fly.io) makes this easy, but you need to confirm it's enabled and that you're redirecting HTTP to HTTPS.
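If you want to enforce the redirect yourself, the common convention is that the platform's edge proxy terminates TLS and forwards the original scheme in the `x-forwarded-proto` header. Here's a small sketch under that assumption (`httpsRedirectTarget` is a made-up helper; check your platform's docs for the header it actually sets):

```javascript
// Decide whether an incoming request should be redirected to HTTPS,
// assuming the proxy sets x-forwarded-proto to the original scheme.
function httpsRedirectTarget(headers, host, path) {
  if (headers["x-forwarded-proto"] === "http") {
    return `https://${host}${path}`; // respond with a 301/308 to this URL
  }
  return null; // already HTTPS, serve the request normally
}
```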

Authentication Libraries Over Custom Auth

Do not build authentication from scratch using AI-generated code. Use established libraries such as Auth.js, Clerk, or Supabase Auth. These have been reviewed, tested, and maintained by security-focused teams. Custom auth code almost always has gaps.

Rate Limiting on Sensitive Endpoints

Your login page, your password reset endpoint, and your API should have rate limiting. Without it, anyone can attempt thousands of password combinations in seconds. Most hosting platforms offer this as middleware; enable it.
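The core idea is small enough to sketch. This in-memory version is illustrative only: production apps typically use platform middleware or a shared store like Redis, because in-memory counters reset on every deploy and aren't shared across server instances.

```javascript
// A minimal fixed-window rate limiter: at most `limit` attempts per
// `windowMs` milliseconds for each key (e.g. an IP address or email).
function createRateLimiter(limit, windowMs) {
  const hits = new Map(); // key -> array of attempt timestamps

  return function allow(key, now = Date.now()) {
    // Keep only attempts that are still inside the current window.
    const recent = (hits.get(key) || []).filter((t) => now - t < windowMs);
    if (recent.length >= limit) {
      hits.set(key, recent);
      return false; // over the limit: reject, e.g. with HTTP 429
    }
    recent.push(now);
    hits.set(key, recent);
    return true;
  };
}
```

Wiring this in front of a login handler means a brute-force attempt gets, say, five tries per minute per IP instead of thousands per second.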

Tools That Help (Most Are Free)

GitHub Secret Scanning: Enabled by default on public repositories; for private repositories it's part of GitHub's paid security add-on. It will alert you if you accidentally commit a credential.

Snyk Free Tier: Scans your dependencies for known vulnerabilities. Run it before you launch. It integrates with GitHub and takes about 10 minutes to set up.

OWASP Top 10: Not a tool but a document, the Open Web Application Security Project's list of the ten most critical web application security risks. Read it once before you ship anything handling user data.

Semgrep Free Tier: Static analysis tool that can scan your codebase for common security patterns. Supports multiple languages.

Using AI to Audit AI: This works surprisingly well. After generating a feature with Claude or GPT-4, follow up with: "Review this code for security vulnerabilities, focusing on injection attacks, authentication weaknesses, and exposed credentials." The model will often catch its own mistakes.

What to Outsource

A security audit (the process of systematically reviewing your application for vulnerabilities) is genuinely specialized work. If you're handling:

  • Payments at scale (more than a few hundred transactions per month)
  • Protected health information (any medical or health data)
  • Children's data (COPPA compliance)
  • Financial data (FINRA or SEC-regulated information)

...you need a professional security review before launch, and probably SOC 2 compliance as you grow. This is not optional, and AI tools are not a substitute.

For everything else (a B2B SaaS tool, a content product, a simple marketplace), the minimum viable security checklist below covers the most common risks.

The Minimum Viable Security Checklist for Vibe Coders

Before you ship anything to real users, confirm:

  • [ ] All credentials are in environment variables, not source code
  • [ ] .env files are in .gitignore
  • [ ] HTTPS is enforced (HTTP redirects to HTTPS)
  • [ ] You're using an established auth library, not custom auth
  • [ ] Database queries use parameterized statements (ask your AI explicitly: "Does this code use parameterized queries?")
  • [ ] Rate limiting is enabled on login and password reset endpoints
  • [ ] Dependencies scanned with Snyk or equivalent
  • [ ] Error messages don't expose internal system details to users
  • [ ] Stripe or your payment processor handles card data (never store raw card numbers)

When to Hire a Real Developer

The vibe coding approach works well for validation, early MVPs, and products that don't handle sensitive data. It works less well, and carries real risk, when:

  • Your user base grows past a few hundred people
  • You're handling payment data at scale
  • You're operating in a regulated industry
  • You've had your first security incident

At that point, a 5-10 hour security review from a professional developer is cheap insurance. Hiring a reviewer through a service like Codementor, or finding a developer on Contra for a security audit engagement, can cost $500–$2,000, a fraction of the cost of a data breach or compliance violation.

The Bottom Line

Pieter Levels' warning was not anti-vibe-coding. It was pro-awareness. The ability to ship fast with AI tools is genuinely powerful. The risks are real and manageable if you know what they are.

Build your product. Use AI to accelerate every part of the process. And before you handle someone else's data, understand what you're handling and how to protect it.

The best place to work through both the code and the security questions is surrounded by other builders who've already asked them. Coworking communities are increasingly becoming informal peer review networks: a place where the developer at the next desk can glance at your auth flow and tell you whether it passes the basic sniff test. That kind of ambient expertise is one of the underrated reasons founders working in flex spaces ship better products than founders working in isolation.