In the rush to adopt and experiment with AI, developers and other technology practitioners are cutting corners. This is evident from several recent security incidents, such as:
- Platform resource abuses (attackers hijack cloud infrastructure to power their own LLM applications)
- Vendors offering unsafe third-party model execution (Probllama)
- Model escape vulnerabilities in hosting services (the Replicate, HuggingFace, and SAP-AI vulnerabilities)
Yet another side effect of these hasty practices is the leakage of AI-related secrets in public code repositories. Secrets in public repositories are nothing new. What's surprising is that after years of research, numerous security incidents, millions of dollars paid out to bug bounty hunters, and broad awareness of the risk, it is still painfully easy to find valid secrets in public repositories.
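Part of why such secrets are so easy to find is that many AI-provider credentials carry recognizable prefixes, so even naive pattern matching turns them up. Below is a minimal sketch of that idea; the pattern names and regexes are illustrative simplifications, not the exhaustive rule sets that real scanners such as gitleaks or TruffleHog ship with.

```python
import re

# Illustrative patterns only -- real-world rules are more precise and numerous.
SECRET_PATTERNS = {
    "openai_api_key": re.compile(r"sk-[A-Za-z0-9]{20,}"),       # common "sk-" prefix
    "huggingface_token": re.compile(r"hf_[A-Za-z0-9]{30,}"),    # "hf_" prefix
    "generic_bearer": re.compile(r"Bearer\s+[A-Za-z0-9._\-]{20,}"),
}

def scan_text(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_string) pairs found in a text blob."""
    findings = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.findall(text):
            findings.append((name, match))
    return findings

# Hypothetical leaked snippet, as might appear in a committed source file.
sample = 'client = OpenAI(api_key="sk-abcdefghijklmnopqrstuvwx")'
print(scan_text(sample))
```

Running a scan like this across the history of a public repository (not just its current tree) is exactly what both attackers and secret-scanning services do at scale.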