LLM Guard provides an extensive set of evaluators for both the inputs and outputs of LLMs, offering sanitization along with detection of harmful language and data leakage.
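As a sketch of how this looks in practice, the snippet below wires a handful of input and output scanners around a prompt/response pair. It follows the usage pattern from the LLM Guard README, so the scanner names (Anonymize, Toxicity, PromptInjection, Sensitive, and so on) and the scan_prompt/scan_output helpers should be checked against the version you install; the example prompt and response text are placeholders.

```python
# Minimal sketch of LLM Guard usage, based on the project's documented
# README pattern; scanner names and helper signatures may vary by release.
from llm_guard import scan_output, scan_prompt
from llm_guard.input_scanners import Anonymize, PromptInjection, TokenLimit, Toxicity
from llm_guard.output_scanners import Deanonymize, NoRefusal, Relevance, Sensitive
from llm_guard.vault import Vault

vault = Vault()  # stores anonymized entities so they can be restored later

# Evaluators applied to what goes into the model...
input_scanners = [Anonymize(vault), Toxicity(), TokenLimit(), PromptInjection()]
# ...and to what comes back out of it.
output_scanners = [Deanonymize(vault), NoRefusal(), Relevance(), Sensitive()]

# Hypothetical prompt for illustration only.
prompt = "Summarize this support ticket from jane.doe@example.com."
sanitized_prompt, input_ok, input_scores = scan_prompt(input_scanners, prompt)

# `response` would normally come from your LLM call using sanitized_prompt.
response = "The customer reports a billing issue and asks for a refund."
sanitized_response, output_ok, output_scores = scan_output(
    output_scanners, sanitized_prompt, response
)

if not all(input_ok.values()) or not all(output_ok.values()):
    print("One or more scanners flagged the exchange:", input_scores, output_scores)
else:
    print(sanitized_response)
```

Each scanner returns a pass/fail verdict and a risk score, so the same pipeline can be used either to block flagged exchanges outright or simply to log them for review.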
Source: Linux Today – LLM Guard: Open-Source Toolkit for Securing Large Language Models