Our investment in Purple Llama reflects a comprehensive approach, guiding developers through the AI innovation landscape from ideation to deployment. It brings together tools and evaluations for testing, improving, and securing generative AI, in support of your mitigation strategies.
We are making available what we believe to be the industry’s first and most comprehensive set of open source cybersecurity safety evaluations for large language models (LLMs): CyberSecEval.
Llama Guard is a high-performance model designed to enhance your existing API-based safeguards. This model is adept at identifying various common types of potentially risky or violating content, catering to a range of developer use cases. It is crucial, however, to regard this tool as a flexible starting point rather than a universal solution.
Think of Llama Guard as both an example and a launchpad for customization, offering valuable insight into detecting common risks. To get the most out of it, integrate Llama Guard as a supplementary layer within your existing mitigation strategy rather than relying on it alone.
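As a rough illustration of the layering described above, the sketch below stacks a Llama Guard-style classifier on top of an existing API-based safeguard. The function names (`moderate`, `keyword_filter`, `llama_guard_stub`) are hypothetical stand-ins, not part of any official API; a real integration would prompt the Llama Guard model and parse its safe/unsafe verdict.

```python
# Minimal sketch (assumptions labeled): combining an existing safeguard
# with a Llama Guard-style check. None of these names come from an
# official Purple Llama API.
from typing import Callable, List

# A safeguard takes text and returns True if the content is flagged.
Safeguard = Callable[[str], bool]

def moderate(text: str, safeguards: List[Safeguard]) -> bool:
    """Flag `text` if any safeguard in the stack flags it."""
    return any(check(text) for check in safeguards)

# Stand-in for an existing keyword-based API safeguard.
def keyword_filter(text: str) -> bool:
    return "forbidden" in text.lower()

# Stand-in for a Llama Guard call; a real integration would send the
# model a formatted prompt and interpret its response.
def llama_guard_stub(text: str) -> bool:
    return text.startswith("unsafe:")

if __name__ == "__main__":
    stack = [keyword_filter, llama_guard_stub]
    print(moderate("hello world", stack))          # False
    print(moderate("unsafe: bad request", stack))  # True
```

The point of the design is that Llama Guard augments, rather than replaces, the checks you already run: each layer can catch risks the others miss.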
To promote a responsible, collaborative AI innovation ecosystem, we’ve established a range of resources for all who use Llama 2: individuals, creators, developers, researchers, academics, and businesses of any size.
The Responsible Use Guide is a resource for developers that provides recommended best practices and considerations for building LLM-powered products responsibly, covering every stage of development from inception to deployment.
With Purple Llama we are furthering our commitment to an open approach to build generative AI with trust and safety in mind. We believe in transparency, open science, and cross-collaboration, and to date, we’ve released over a thousand open-source libraries, models, datasets, and more. The launch of Purple Llama is another contribution towards creating an open AI ecosystem, involving academics, policymakers, industry professionals, and society at large in the responsible development of generative AI.
In fostering a collaborative approach, we look forward to partnering with MLCommons, the newly formed AI Alliance, and a number of leading AI companies.