Nvidia has joined the National Institute of Standards and Technology’s new US Artificial Intelligence Safety Institute Consortium (AISIC) as part of the company’s effort to advance safe, secure and trustworthy AI.

AISIC will work to create tools, methodologies and standards to promote the safe and trustworthy development and deployment of AI. As a member, Nvidia will work with NIST — an agency of the US Department of Commerce — and fellow consortium members to advance the consortium’s mandate.

Nvidia already has a number of initiatives to make AI safety a reality, including NeMo Guardrails, open-source software for ensuring that large language model responses are accurate, appropriate, on topic and secure.
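
To illustrate, here is a minimal sketch of how NeMo Guardrails can be used to keep a model's responses on topic. The Colang flow, model choice and wording below are illustrative assumptions rather than a production configuration:

```python
# A minimal sketch of keeping an LLM on topic with NeMo Guardrails.
# The flow and model settings are illustrative, not NVIDIA's
# recommended configuration.
from nemoguardrails import LLMRails, RailsConfig

# Model settings; in practice, configs usually live in a directory
# and are loaded with RailsConfig.from_path("path/to/config").
yaml_content = """
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo-instruct
"""

# A Colang rail that politely declines off-topic questions.
colang_content = """
define user ask off topic
  "What do you think about politics?"

define bot refuse off topic
  "Sorry, I can only help with product questions."

define flow
  user ask off topic
  bot refuse off topic
"""

config = RailsConfig.from_content(
    colang_content=colang_content, yaml_content=yaml_content
)
rails = LLMRails(config)

response = rails.generate(messages=[
    {"role": "user", "content": "What do you think about politics?"}
])
print(response["content"])  # the rail's refusal, not a free-form answer
```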

Through the consortium, NIST aims to facilitate knowledge sharing and advance applied research and evaluation activities to accelerate innovation in trustworthy AI.

AISIC members, which include more than 200 of the nation’s leading AI creators, academics, government and industry researchers, as well as civil society organisations, bring technical expertise in areas such as AI governance, systems and development, psychometrics and more.

In addition to participating in working groups, Nvidia plans to contribute its computing resources, its best practices for implementing AI risk-management frameworks and AI model transparency, and several Nvidia-developed, open-source AI safety, red-teaming and security tools.