NVIDIA open source "guardrail" software for generative AI security


The security of generative AI is a growing concern. In response, NVIDIA has designed and open sourced NeMo Guardrails, software built for this purpose that works with a wide range of applications based on large language models (LLMs).

The software contains all the code, examples, and documentation an enterprise needs to improve the safety of text-generating AI applications. It lets developers tune LLM-driven applications so that their responses are accurate, appropriate, on-topic, and secure, and stay within the company's domain of expertise.

LLMs are being adopted by a growing number of industries to answer customer questions, summarize lengthy documents, and even write software and accelerate drug development. The newly released open source software, NeMo Guardrails, helps developers guide generative AI applications to produce impressive text responses, and was developed specifically to help users keep this new class of AI-powered applications secure.

NeMo Guardrails enables developers to build three kinds of guardrails:

1. Topical guardrails prevent applications from straying into off-target areas, for example keeping a customer service assistant from answering questions about the weather.

2. Functional security guardrails ensure that applications can respond with accurate and appropriate information. They filter out unwanted language and force models to cite only reliable sources.

3. Information security guardrails restrict applications to connecting only with external third-party applications that have been confirmed to be secure.

Almost any software developer can use NeMo Guardrails; no machine learning expertise or data science background is required, and new rules can be created with just a few lines of code.
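Those "few lines" are written in Colang, the modeling language that ships with NeMo Guardrails. The fragment below is a sketch based on the style of the project's published examples, not copied from them; the specific utterances and flow name are illustrative.

```
# Illustrative Colang-style rule keeping an assistant off the weather topic.

define user ask about weather
  "what is the weather like"
  "is it going to rain today"

define bot explain scope
  "Sorry, I can only answer questions about our products."

define flow weather deflection
  user ask about weather
  bot explain scope
```

A rule pairs example user utterances with a scripted bot response, and the flow ties them together so the model never free-generates an off-topic answer.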

NeMo Guardrails is open source and can be used with all the tools used by enterprise application developers.

For example, it can run on top of LangChain, an open source toolkit that a growing number of developers use to plug third-party applications into LLMs. "Users can easily add NeMo Guardrails to their LangChain workflows to quickly set up safety boundaries for their AI applications," says Harrison Chase, creator of the LangChain toolkit.

NVIDIA is integrating NeMo Guardrails into the NVIDIA NeMo framework. The framework contains everything users need to train and tune language models on their own proprietary data, and much of it is already available as open source on GitHub. As part of the NVIDIA AI Enterprise software platform, it is also offered to enterprises as a fully supported package.

NeMo is also available as a service. As part of NVIDIA AI Foundations, this set of cloud services is aimed at enterprises that want to create and run custom generative AI models based on their own datasets and domain knowledge.

Using NeMo, Korea's leading mobile operator built an intelligent assistant that has already handled 8 million conversations with customers, and a research team in Sweden used NeMo to create LLMs that automatically generate text for the country's hospitals, government agencies, and businesses.

Building effective guardrails for generative AI is a hard problem that will require significant ongoing research as AI evolves. By open-sourcing NeMo Guardrails, the product of years of research, NVIDIA aims to harness the energy of the developer community and advance AI safety efforts, helping enterprises ensure that their intelligent services meet functional safety, privacy, and information security requirements.