A secure local sandbox to run LLM-generated code using Apple containers
-
Updated
Feb 4, 2026 - Python
A secure local sandbox to run LLM-generated code using Apple containers
Security scanner for local LLMs scanning LLM vulnerabilities including jailbreaks, prompt injection, training data leakage, and adversarial abuse
AI Diet Assistant: React + Flask + local LLM for personalized meal plans and nutrition insights.
🔍 Enhance local LLM security by testing for vulnerabilities like prompt injection, model inversion, and data leakage with this robust toolkit.
Add a description, image, and links to the llmstudio topic page so that developers can more easily learn about it.
To associate your repository with the llmstudio topic, visit your repo's landing page and select "manage topics."