I don't post often. But this one matters to me.
Yesterday I attended a presentation by Rob van der Veer (Chief AI Officer at SIG, founder of the OWASP AI Exchange), and it pushed me to finally publish something I'd been sitting on.
I play with AI tools a lot — Cursor, Copilot, Claude Code, Windsurf — to really understand them. And I keep seeing the same thing: people aren't ignoring the risks on purpose. They just don't know the risks exist.
Meanwhile, companies split into two camps: full speed ahead and hope for the best, or lock everything down out of fear. There's a middle ground nobody's talking about.
Most "AI security incidents" are boring. The agent reads your .env. It suggests a package that doesn't exist (and an attacker already registered that name). It edits your CI pipeline to "fix" something. Not malicious — just unguided.
So I built ai-repository-security-baseline.
One AGENTS.md with the rules. Tool-specific config files (Cursor, Copilot, Cline, Aider, Devin, Junie, Amazon Q, and more) all point to it. Drop the files in your repo, fill in the placeholders, commit. Done.
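To give a feel for the pattern (the file names below follow the tools' own conventions, but the rule wording is illustrative, not the repo's exact contents):

```
# AGENTS.md (illustrative excerpt of the shared rules)
- Never read, print, or commit secrets: .env files, keys, tokens, credentials.
- Verify a package exists on the official registry before adding it as a dependency.
- Do not modify CI/CD pipeline files without explicit human approval.

# .github/copilot-instructions.md (one of the per-tool pointer files)
Follow the security rules in AGENTS.md at the repository root.
```

The rules live in one place; each tool's native config just delegates to it.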
Ten minutes of setup. Sensible defaults. Then you go back to building.
Fork it, improve it, tear it apart, suggest changes, enjoy!
→ github.com/the-missing-pink/ai-repository-security-baseline