NIST Releases a Tool for Testing AI Model Risk.

Matilda
Advancements in artificial intelligence (AI) bring both exciting opportunities and significant risks. As AI models are increasingly integrated into critical systems, ensuring their integrity and security has become crucial. The National Institute of Standards and Technology (NIST) has taken an important step forward by re-releasing Dioptra, a tool designed to measure and mitigate AI model risks.

What is Dioptra?

Dioptra, named after an ancient astronomical and surveying instrument, is a modular, open-source, web-based tool. Initially released in 2022, Dioptra was developed to help companies, government agencies, and individuals assess, analyze, and track AI risks. By providing a common platform for testing models against simulated threats, Dioptra aims to enhance the robustness and reliability of AI systems.

The Importance of Testing AI Models

AI models are vulnerable to various types of attacks, with adversarial attacks being among the most concerning. These attacks can degrade the p…
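To make the idea of an adversarial attack concrete, the sketch below shows a single-step perturbation in the style of the Fast Gradient Sign Method (FGSM). It is a minimal illustration of the general technique rather than part of Dioptra's own interface; the toy model, random input, and epsilon value are placeholder assumptions.

```python
# Illustrative FGSM-style adversarial perturbation (not Dioptra's API).
# The model, input, label, and epsilon below are placeholder assumptions.
import torch
import torch.nn as nn

def fgsm_perturb(model: nn.Module, x: torch.Tensor, label: torch.Tensor,
                 epsilon: float = 0.03) -> torch.Tensor:
    """Return a copy of x perturbed by one FGSM step to increase the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), label)
    loss.backward()
    # Step in the sign of the gradient, then clamp to a valid pixel range.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    # Tiny stand-in classifier and a random "image", purely for demonstration.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x = torch.rand(1, 1, 28, 28)
    y = torch.tensor([3])
    x_adv = fgsm_perturb(model, x, y)
    print("max perturbation:", (x_adv - x).abs().max().item())
```

A testbed like Dioptra is meant to run this kind of simulated attack systematically against a candidate model and report how much its accuracy drops, rather than requiring each team to hand-roll such experiments.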