The ability to automate decisions is becoming essential for enterprises in industries where mission-critical processes involve many variables. In the financial sector, for example, assessing the risk of even a single transaction can be enormously complex. But while the utility of AI-powered, automated decision-making systems is undeniable, that utility often comes at the expense of transparency: the systems can be hard to interpret in practice, particularly when they integrate with other AI systems.
In search of a solution, researchers at Red Hat developed the TrustyAI Explainability Toolkit, a library of techniques for explaining automated decision-making systems. Part of Kogito, Red Hat’s cloud-native business automation framework, TrustyAI enriches model execution information with explainability algorithms while extracting, collecting, and publishing metadata for auditing and compliance.
TrustyAI arrived in Kogito last summer but was released as a standalone open source package this week.
Transparency with TrustyAI
As the development team behind TrustyAI explains in a whitepaper, the toolkit can introspect black-box AI decision-making models and describe their predictions and outcomes through a “feature importance” chart. The chart ranks a model’s inputs by how strongly each one influenced the decision, which can help determine whether a model is biased, the team says.
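To make the concept concrete, here is a minimal Python sketch of one common way to estimate feature importance for a black-box model: shuffle one input column at a time and measure how much the model’s score degrades. This illustrates the general idea only, not TrustyAI’s actual algorithms or API, and the feature names in the demo are hypothetical.

```python
import numpy as np

def permutation_importance(predict, X, y, metric, n_repeats=10, seed=0):
    """Score each feature by how much shuffling it degrades a black-box
    model's metric -- the raw material for a 'feature importance' chart."""
    rng = np.random.default_rng(seed)
    baseline = metric(y, predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])  # break the link between feature j and y
            drops.append(baseline - metric(y, predict(X_perm)))
        importances[j] = float(np.mean(drops))
    return importances

# Toy demo: a synthetic "risk model" in which only the first feature matters.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = (X[:, 0] > 0).astype(int)
predict = lambda X: (X[:, 0] > 0).astype(int)            # opaque scorer stand-in
accuracy = lambda y_true, y_pred: float(np.mean(y_true == y_pred))

for name, score in zip(["amount", "country", "hour"],
                       permutation_importance(predict, X, y, accuracy)):
    print(f"{name}: {score:.3f}")
```

Running the demo ranks “amount” far above the other two features, which is exactly the kind of ordering a feature importance chart surfaces for a reviewer.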
TrustyAI offers a dashboard, called Audit UI, aimed at business users and auditors, where each automated decision-making workload is recorded and can be analyzed at a later date. For individual workloads, the toolkit exposes the inputs, the outcomes the model produced, and a detailed explanation of each outcome. Monitoring dashboards are generated from model information so users can track business-level metrics and get an aggregated view of decision behavior.
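As a rough illustration of what such auditing implies, the sketch below shows the kind of record a decision service might persist for each workload: the inputs, the outcome, and the accompanying explanation, keyed by an ID and timestamp for later review. This is a hypothetical schema, not the actual format used by TrustyAI’s Audit UI.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json
import uuid

@dataclass
class DecisionRecord:
    """Hypothetical audit entry: what an explainable decision service
    might store so each automated outcome can be reviewed later."""
    inputs: dict                  # raw features the model received
    outcome: dict                 # what the model decided
    feature_importance: dict      # explanation, e.g. {"amount": 0.61, ...}
    model_version: str
    decision_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = DecisionRecord(
    inputs={"amount": 1250.0, "country": "DE"},
    outcome={"approved": False, "risk_score": 0.87},
    feature_importance={"amount": 0.61, "country": 0.22},
    model_version="fraud-model:1.3",
)
print(json.dumps(asdict(record), indent=2))
```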
TrustyAI’s runtime monitoring also displays business and operational metrics in a Grafana dashboard, and the toolkit monitors operational aspects to keep track of the health of the automated decision-making system.
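As a generic illustration of this pattern, and not TrustyAI’s actual metric names or exporter, the sketch below shows how a decision service might publish business and operational metrics with the Python prometheus_client library; Prometheus scrapes the endpoint and Grafana charts the resulting series.

```python
import random
import time
from prometheus_client import Counter, Histogram, start_http_server

# Hypothetical metric names, for illustration only.
DECISIONS = Counter("decisions_total", "Automated decisions made", ["outcome"])
LATENCY = Histogram("decision_latency_seconds", "End-to-end decision latency")

def decide(transaction):
    # Stand-in for the real model plus decision logic.
    return "approved" if transaction["amount"] < 1000 else "rejected"

if __name__ == "__main__":
    start_http_server(8000)  # exposes /metrics for Prometheus to scrape
    while True:
        with LATENCY.time():  # operational metric: decision latency
            outcome = decide({"amount": random.uniform(0, 2000)})
        DECISIONS.labels(outcome=outcome).inc()  # business metric: outcomes
        time.sleep(0.5)
```

A Grafana dashboard pointed at Prometheus can then plot approval rates and latency percentiles over time, which is the aggregated operational view the paragraph above describes.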
“Within TrustyAI, [we combine] machine learning models and decision logic to enrich automated decisions by including predictive analytics. By monitoring the outcome of decision making, we can audit systems to ensure they … meet regulations,” Rebecca Whitworth, part of the TrustyAI initiative at Red Hat, wrote in a blog post. “We can also trace these results through the system to help with a global overview of the decisions and predictions made. TrustyAI [relies] on the combination of these two standards to ensure trusted automated decision making.”
Transparency is one facet of so-called responsible AI, a practice that also benefits enterprises. A study by Capgemini found that customers and employees will reward organizations that practice ethical AI with greater loyalty, more business, and even a willingness to advocate for them, and will punish those that don’t. The study suggests companies that don’t approach the issue thoughtfully risk both reputational damage and a direct hit to their bottom line.