Microsoft open-sources Counterfit, an AI security risk assessment tool

Microsoft today open-sourced Counterfit, a tool designed to help developers test the security of AI and machine learning systems. The company says that Counterfit can enable organizations to conduct assessments to ensure that the algorithms used in their businesses are robust, reliable, and trustworthy.

AI is being increasingly deployed in regulated industries like healthcare, finance, and defense. But organizations are lagging behind in their adoption of risk mitigation strategies. A Microsoft survey found that 25 out of 28 businesses indicated they don’t have the right resources in place to secure their AI systems, and that security professionals are looking for specific guidance in this space.

Microsoft says that Counterfit was born out of the company’s need to assess AI systems for vulnerabilities with the goal of proactively securing AI services. The tool started as a corpus of attack scripts written specifically to target AI models and then morphed into an automation product to benchmark multiple systems at scale.

Under the hood, Counterfit is a command-line utility that provides a layer for adversarial frameworks, preloaded with algorithms that can be used to evade and steal models. Counterfit seeks to make published attacks accessible to the security community while offering an interface from which to build, manage, and launch those attacks on models.
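
Since Counterfit layers over existing adversarial ML frameworks, one way to get a feel for the attacks it preloads is to run one of those frameworks directly. The sketch below uses the open-source Adversarial Robustness Toolbox (ART), not Counterfit’s own interface, to launch a black-box evasion attack (HopSkipJump) against an ordinary scikit-learn classifier; the dataset, model, and attack parameters here are chosen purely for illustration.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import HopSkipJump

# Train a plain scikit-learn model on a small dataset.
X, y = load_digits(return_X_y=True)
X = X / 16.0  # scale pixel values into [0, 1]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Wrap the model so the attack can query it like a black box.
classifier = SklearnClassifier(model=model, clip_values=(0.0, 1.0))

# HopSkipJump is a decision-based (black-box) evasion attack:
# it only needs the model's predicted labels, not its gradients.
attack = HopSkipJump(classifier=classifier, max_iter=10, max_eval=100, init_eval=10)
X_adv = attack.generate(x=X_test[:5])

print("clean predictions:      ", model.predict(X_test[:5]))
print("adversarial predictions:", model.predict(X_adv))
```

Counterfit’s value, per Microsoft, is that it puts attacks like this behind a single command-line interface instead of requiring teams to script each framework by hand.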

When conducting penetration testing on an AI system with Counterfit, security teams can opt for the default settings, set random parameters, or customize each parameter for broad vulnerability coverage. Organizations with multiple models can use Counterfit’s built-in automation to scan them all, optionally multiple times, in order to create operational baselines; a sketch of that idea follows below.
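
To make the idea of an operational baseline concrete, here is a minimal, hypothetical sketch (not Counterfit code) of scanning several models repeatedly and recording the attack success rate of each run; the helper names and the choice of mean success rate as the baseline are assumptions for illustration.

```python
from dataclasses import dataclass
from statistics import mean
from typing import Callable, Dict, List


@dataclass
class ScanResult:
    model_name: str
    success_rates: List[float]  # attack success rate for each repeated run

    @property
    def baseline(self) -> float:
        # Treat the mean success rate across runs as the operational baseline.
        return mean(self.success_rates)


def scan_models(
    models: Dict[str, object],
    run_attack: Callable[[object], float],
    repeats: int = 3,
) -> List[ScanResult]:
    """Run the same attack `repeats` times against every model.

    `run_attack` is assumed to take a model and return the fraction of
    inputs the attack successfully misclassified in that run.
    """
    return [
        ScanResult(name, [run_attack(model) for _ in range(repeats)])
        for name, model in models.items()
    ]
```

Tracking a number like this over time is what lets a team notice when a model’s robustness regresses.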

Counterfit also provides logging to record the attacks against a target model. As Microsoft notes, telemetry might drive engineering teams to improve their understanding of a failure mode in a system.
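
As a rough sketch of the kind of telemetry such logging could feed to engineering teams, the snippet below writes a hypothetical per-attack record to a JSON-lines file; the field names are illustrative assumptions, not Counterfit’s actual log schema.

```python
import json
import time

# A hypothetical attack-log record; Counterfit's real log format may differ.
record = {
    "timestamp": time.time(),
    "target": "credit-fraud-model",   # illustrative target name
    "attack": "hop_skip_jump",        # illustrative attack name
    "parameters": {"max_iter": 10, "norm": 2},
    "queries_to_model": 1200,
    "success_rate": 0.4,
}

with open("attack_log.jsonl", "a") as fh:
    fh.write(json.dumps(record) + "\n")
```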

The business value of responsible AI

Internally, Microsoft says it uses Counterfit as part of its AI red team operations and in the AI development phase to catch vulnerabilities before they hit production. The company says it has also tested Counterfit with several customers, including aerospace giant Airbus, which is developing an AI platform on Azure AI services. “AI is increasingly used in industry; it is vital to look ahead to securing this technology particularly to understand where feature space attacks can be realized in the problem space,” Matilda Rhode, a senior cybersecurity researcher at Airbus, said in a statement.

The value of tools like Counterfit is quickly becoming apparent. A study by Capgemini found that customers and employees will reward organizations that practice ethical AI with greater loyalty, more business, and even a willingness to advocate for them — and in turn, punish those that don’t. The study suggests that there’s both reputational risk and a direct impact on the bottom line for companies that don’t approach the issue thoughtfully.

Basically, consumers want confidence that AI is secure from manipulation. One of the recommendations from Gartner’s Top 5 Priorities for Managing AI Risk framework, published in January, is that organizations “[a]dopt specific AI security measures against adversarial attacks to ensure resistance and resilience.” The research firm estimates that by 2024, organizations that implement dedicated AI risk management controls will avoid negative AI outcomes twice as often as those that don’t.

According to a Gartner report, through 2022, 30% of all AI cyberattacks will leverage training-data poisoning, model theft, or adversarial samples to attack machine learning-powered systems.

Counterfit is part of Microsoft’s broader push toward explainable, secure, and “fair” AI systems. The company’s attempts at solutions to those and other challenges include AI bias-detecting tools, an open adversarial AI framework, internal efforts to reduce prejudicial errors, AI ethics checklists, and a committee (Aether) that advises on AI pursuits. Recently, Microsoft debuted WhiteNoise, a toolkit for differential privacy, as well as Fairlearn, which aims to assess AI systems’ fairness and mitigate observed unfairness in algorithms.
