Growing AI adoption prompted the National Institute of Standards and Technology (NIST) to introduce Dioptra, a new open-source software tool for evaluating security risks in AI models that supports the "measure" function of the NIST AI Risk Management Framework, SC Media reports.
Dioptra, which is available on GitHub, can be used to measure how much evasion, poisoning, and oracle attacks degrade the performance of various AI models, including those for image classification and speech recognition, according to NIST.
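To make the first of those attack classes concrete, the following is a minimal sketch of an evasion attack, the Fast Gradient Sign Method, written in plain PyTorch. It is not Dioptra's own API; Dioptra orchestrates and scores attacks of this kind against registered models. The toy classifier, the `epsilon` perturbation budget, and the accuracy comparison are all illustrative assumptions.

```python
# Illustrative sketch only: a Fast Gradient Sign Method (FGSM) evasion
# attack in plain PyTorch. This is NOT Dioptra's API; it shows the kind
# of attack whose impact a tool like Dioptra measures.
import torch
import torch.nn as nn

def fgsm_evasion(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                 epsilon: float = 0.03) -> torch.Tensor:
    """Perturb inputs x so the model is more likely to misclassify them.

    epsilon is a hypothetical per-pixel perturbation budget.
    """
    model.eval()
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to the
    # valid pixel range so the result is still a plausible image.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Hypothetical usage: compare accuracy on clean vs. adversarial inputs,
# the sort of robustness metric an evaluation platform would report.
if __name__ == "__main__":
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
    x = torch.rand(8, 1, 28, 28)    # fake image batch
    y = torch.randint(0, 10, (8,))  # fake labels
    x_adv = fgsm_evasion(model, x, y)
    clean_acc = (model(x).argmax(1) == y).float().mean().item()
    adv_acc = (model(x_adv).argmax(1) == y).float().mean().item()
    print(f"clean accuracy: {clean_acc:.2f}, adversarial accuracy: {adv_acc:.2f}")
```

The drop from clean to adversarial accuracy is the basic signal such an evaluation produces; poisoning and oracle attacks are measured analogously but target the training data and the model's query interface, respectively.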
"User feedback has helped shape Dioptra and NIST plans to continue to collect feedback and improve the tool," said a NIST spokesperson. Such a tool has been released by NIST alongside a new dual-use foundation model risk management draft from the agency's AI Safety Institute, which will be open for public comments until September 9.
"For all its potentially transformational benefits, generative AI also brings risks that are significantly different from those we see with traditional software. These guidance documents and testing platform will inform software creators about these unique risks and help them develop ways to mitigate those risks while supporting innovation," said NIST Director Laurie E. Locascio.