Microsoft today open-sourced Counterfit, a tool designed to help developers test the security of AI and machine learning systems. The company says that Counterfit can enable organizations to conduct assessments to ensure that the algorithms used in their businesses are robust, reliable, and trustworthy.
AI is increasingly being deployed in regulated industries like health care, finance, and defense, but organizations lag behind in adopting risk mitigation strategies. In a Microsoft survey, 25 out of 28 businesses indicated they don’t have the right resources in place to secure their AI systems, and security professionals said they are looking for specific guidance in this space.
Microsoft says that Counterfit was born out of the company’s need to assess AI systems for vulnerabilities, with the goal of proactively securing AI services. The tool started as a corpus of attack scripts written specifically to target individual AI models and then morphed into an automation product for benchmarking multiple systems at scale.
Under the hood, Counterfit is a command-line utility that provides an automation layer for adversarial machine learning frameworks, preloaded with attack algorithms that can be used to evade and steal models. Counterfit seeks to make published attacks accessible to the security community while offering an interface from which to build, manage, and launch those attacks on models.
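To make that concrete, here is a minimal, self-contained sketch of a black-box evasion attack of the kind such frameworks package: it queries a stand-in model and perturbs an input until the prediction flips. This is an illustration only, not Counterfit’s actual code; the stand-in model and the `evade` helper are assumptions for the example.

```python
# Minimal sketch of a black-box evasion attack: nudge an input with random
# perturbations until the model's prediction flips, using only query access.
# Illustrative only; not Counterfit's implementation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=200, n_features=10, random_state=0)
model = LogisticRegression().fit(X, y)  # stands in for the target AI system

def evade(x, predict, eps=0.05, max_queries=1000):
    """Query-only attack: random-walk the input until its label changes."""
    original = predict(x.reshape(1, -1))[0]
    adv = x.copy()
    for _ in range(max_queries):
        candidate = adv + rng.normal(scale=eps, size=x.shape)
        if predict(candidate.reshape(1, -1))[0] != original:
            return candidate  # adversarial example found
        adv = candidate  # keep drifting away from the original input
    return None

adv = evade(X[0], model.predict)
print("evasion succeeded:", adv is not None)
```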
When conducting penetration testing on an AI system with Counterfit, security teams can opt for the default settings, set random parameters, or customize each parameter for broad vulnerability coverage. Organizations with multiple models can use Counterfit’s built-in automation to scan, optionally multiple times, in order to create operational baselines.
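As a rough illustration of that workflow, the sketch below runs a placeholder attack repeatedly against several models with randomized parameters and records success rates as a baseline. The `run_attack` function and the target names are hypothetical assumptions; Counterfit’s own automation will differ.

```python
# Hypothetical sketch of an automated scan: attack several models many
# times with randomized parameters, then record per-model success rates
# as an operational baseline. All names here are illustrative assumptions.
import random
import statistics

def run_attack(target, eps):
    """Stand-in for launching one attack; returns True on evasion success."""
    return random.random() < 0.3 + eps  # placeholder outcome model

targets = ["fraud-model", "vision-model", "nlp-model"]  # assumed model names
baseline = {}
for target in targets:
    runs = []
    for _ in range(20):  # repeated scans smooth out attack randomness
        eps = random.uniform(0.01, 0.1)  # randomized parameter setting
        runs.append(run_attack(target, eps))
    baseline[target] = statistics.mean(runs)  # attack success rate

for target, rate in sorted(baseline.items()):
    print(f"{target}: {rate:.0%} of attacks succeeded")
```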
Counterfit also provides logging to record the attacks launched against a target model. As Microsoft notes, the resulting telemetry can help engineering teams better understand the failure modes of their systems.
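In that spirit, a minimal attack log might append one structured record per attack so telemetry can be aggregated later. The field names and JSON-lines format here are assumptions for illustration, not Counterfit’s actual log schema.

```python
# Minimal sketch of attack telemetry: one JSON record per attack, appended
# to a log file so engineering teams can study failure modes afterward.
# Field names and format are assumptions, not Counterfit's log schema.
import json
import time

def log_attack(path, target, attack, params, success):
    record = {
        "timestamp": time.time(),
        "target": target,
        "attack": attack,
        "params": params,
        "success": success,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")  # JSON lines: easy to aggregate

log_attack("attacks.jsonl", "fraud-model", "hop_skip_jump",
           {"max_queries": 1000}, success=True)
```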
The business value of responsible AI
Internally, Microsoft says that it uses Counterfit as a part of its AI red team operations and in the AI development phase to catch vulnerabilities before they hit production. And the company says it’s tested Counterfit with several customers, including aerospace giant Airbus, which is developing an AI platform on Azure AI services. “AI is increasingly used in industry; it is vital to look ahead to securing this technology particularly to understand where feature space attacks can be realized in the problem space,” Matilda Rhode, a senior cybersecurity researcher at Airbus, said in a statement.
The value of tools like Counterfit is quickly becoming apparent. A study by Capgemini found that customers and employees will reward organizations that practice ethical AI with greater loyalty, more business, and even a willingness to advocate for them — and in turn, punish those that don’t. The study suggests that there’s both reputational risk and a direct impact on the bottom line for companies that don’t approach the issue thoughtfully.
Basically, consumers want confidence that AI is secure from manipulation. One of the recommendations in Gartner’s Top 5 Priorities for Managing AI Risk framework, published in January, is that organizations “[a]dopt specific AI security measures against adversarial attacks to ensure resistance and resilience.” The research firm estimates that by 2024, organizations that implement dedicated AI risk management controls will avoid negative AI outcomes twice as often as those that don’t.
According to a Gartner report, through 2022, 30% of all AI cyberattacks will leverage training-data poisoning, model theft, or adversarial samples to attack machine learning-powered systems.
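To illustrate one of those attack classes, the short sketch below shows training-data poisoning: flipping a fraction of training labels typically degrades a model trained on the tainted set. The dataset and flip rate are synthetic assumptions made for the example.

```python
# Illustrative sketch of training-data poisoning: an attacker who can flip
# a fraction of training labels typically degrades the resulting model.
# Data and poisoning rate are synthetic assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

rng = np.random.default_rng(1)
poisoned = y_tr.copy()
idx = rng.choice(len(poisoned), size=len(poisoned) // 5, replace=False)
poisoned[idx] = 1 - poisoned[idx]  # attacker flips 20% of training labels
dirty = LogisticRegression(max_iter=1000).fit(X_tr, poisoned)

print("clean accuracy:   ", clean.score(X_te, y_te))
print("poisoned accuracy:", dirty.score(X_te, y_te))
```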
Counterfit is a part of Microsoft’s broader push toward explainable, secure, and “fair” AI systems. The company’s efforts to address those and other challenges include AI bias-detecting tools, an open adversarial AI framework, internal initiatives to reduce prejudicial errors, AI ethics checklists, and a committee (Aether) that advises on AI pursuits. Recently, Microsoft debuted WhiteNoise, a toolkit for differential privacy, as well as Fairlearn, which aims to assess AI systems’ fairness and mitigate any observed unfairness in algorithms.
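As a small example of the kind of assessment Fairlearn enables, its MetricFrame can break a metric down by a sensitive feature; everything in the sketch below except the MetricFrame API is a synthetic assumption.

```python
# Small example of a fairness assessment with Fairlearn's MetricFrame:
# compute accuracy per group of a sensitive feature and the gap between
# groups. Labels, predictions, and groups here are synthetic assumptions.
import numpy as np
from fairlearn.metrics import MetricFrame
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=500)
y_pred = rng.integers(0, 2, size=500)       # stand-in model outputs
group = rng.choice(["A", "B"], size=500)    # assumed sensitive feature

mf = MetricFrame(metrics=accuracy_score,
                 y_true=y_true, y_pred=y_pred,
                 sensitive_features=group)
print(mf.by_group)      # accuracy per group
print(mf.difference())  # largest gap between groups
```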