Artificial intelligence is being applied to everything from recommending TV shows and detecting credit card fraud to forecasting weather and predicting disease.

But along with those benefits come responsibilities.

As AI handles more and more of our personal data, consumers are asking important questions. Where is AI being applied? Is it making biased decisions? And how can organizations be sure that they are using AI ethically?

Cisco employs AI for a wide range of use cases. These include predicting network outages, combing vast troves of data for security threats, and powering Webex features like real-time translation.

Cisco is also a leader in using AI ethically and responsibly, thanks to its policies regarding data privacy and human rights.

We spoke with Anurag Dhingra, Chief Technology Officer for Cisco Collaboration and executive sponsor of a new initiative — Cisco’s Responsible Artificial Intelligence Framework. Dhingra shared his insights on how AI can reach its full potential — to make our lives simpler, better, and more connected — without compromising ethical standards.

Thank you, Anurag! AI is already quite pervasive and bringing us many benefits. But what are some of the challenges that remain?

AI is starting to make a very tangible difference in making life better for everyone. And AI is emerging at a time when many consumers have grown used to accessing cloud-based services via the device of their choice at the time of their choosing. They are often unaware of AI-powered features and will assume a service “just works,” and that the companies offering those features have the best intentions for their audiences.

However, consumers, businesses and governments are now becoming increasingly aware of the importance of data privacy and security. As the industry matures, I think it’s possible that end users will become increasingly confused about when AI is being used and how it is working. Transparency is becoming a top-of-mind concern.

The potential pervasiveness of AI-based technologies is becoming a challenge for the industry and tech providers. For example, there’s a risk that implicit and explicit human biases get encoded in a system and then amplified in unforeseen ways. AI is now a technology requiring rigorous scrutiny and oversight.

Cisco recently released two major works of thought leadership, the 2022 Privacy Benchmark Study and the Cisco Responsible AI Framework. Let’s start with the study. What are some of the key findings around AI that emerged?

One thing that came out very clearly is that privacy is now a business imperative. Almost 90 percent of respondents stated that they would not buy anything from a company that does not take a privacy-first approach to their data. A large number of respondents also don’t understand what companies are collecting about them and how they’re using that data. And that generates a lot of hesitancy when it comes to newer technologies like AI.

Interestingly, several organizations that responded to the survey claim that they have processes that apply to AI in accordance with customer expectations. But users don’t agree: 56 percent of respondents say that they don’t understand how companies apply AI. So, there’s a gap between what organizations think they’re doing and what customers are perceiving.

Which brings us to Cisco’s Principles for Responsible Artificial Intelligence. How are we setting a new standard for ethical AI?

I’m the executive sponsor of this initiative, and I’m proud to work with a great cross-functional team to define these principles. We recognized that we needed a framework to provide clear guidance to product teams throughout the product lifecycle: how they should think about these questions and concerns, and how they could mitigate some of these challenges, especially around bias and lack of transparency in AI.

What are the fundamental principles that your team settled on?

There are six core principles. I’ll give you a quick summary of each:

  • Transparency is all about explaining when AI is in use and how it is making decisions.
  • Fairness focuses on promoting inclusivity and mitigating bias and discrimination in AI systems (see the illustrative sketch after this list).
  • Accountability is about taking the responsibility that comes with building these systems, making sure that they operate according to their intended use, and helping prevent unintended use. It also means standing behind the systems that you’re building.
  • Privacy is all about collecting and using personal data with consent, but also applying that to proportional and fair AI use cases. It’s a new lens on privacy as it applies to AI/ML systems.
  • Security means mitigating threats, as traditional systems do, but also examining threat vectors that are unique to AI and machine learning, and designing protections against new kinds of misuse.
  • And finally, reliability is all about making sure that AI systems produce results consistently and that consistency of intent and operation is maintained under varying conditions.
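
For illustration only: the framework itself does not prescribe implementations, but a minimal sketch of how a product team might quantify one aspect of the fairness principle could look like the following. It computes the demographic parity gap, i.e., the spread in positive-outcome rates across groups. The function name and data here are hypothetical, not part of Cisco's framework.

```python
# Hypothetical example (not from Cisco's framework): measuring one common
# bias signal -- the demographic parity gap -- over a model's binary decisions.

def demographic_parity_gap(outcomes, groups):
    """Return the largest spread in positive-outcome rate across groups.

    outcomes: iterable of 0/1 model decisions
    groups:   iterable of group labels, aligned with outcomes
    """
    counts = {}  # group -> (total decisions, positive decisions)
    for outcome, group in zip(outcomes, groups):
        total, positives = counts.get(group, (0, 0))
        counts[group] = (total + 1, positives + outcome)
    rates = [positives / total for total, positives in counts.values()]
    return max(rates) - min(rates)

# Toy data: group "A" gets a positive decision 75% of the time, group "B" only 25%.
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
labels = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(decisions, labels))  # 0.5 -- a large gap worth investigating
```

A gap near zero suggests the model treats groups similarly on this one measure; in practice, teams would look at several fairness metrics, since no single number captures bias.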

To focus on just one principle, Cisco takes transparency very seriously. How are we adopting that specific principle?

With transparency, we have a very good resource in the Cisco Trust Center that our customers and users have come to rely on. There you can find information about our use of AI, especially when it’s used to make consequential decisions, as well as information about our collection and usage of data and how we keep privacy at the forefront. All Cisco products go through a very comprehensive review, and we publish the results on the Trust Center in a transparent manner.

How do you feel other organizations will respond to these guidelines?

I believe that most organizations want to do the right thing. And I think they will, over time, adopt frameworks like the one we’ve just published. Just as privacy has become a business imperative, I think responsible use of AI will as well. Some of our industry peers are also investing in frameworks like ours, and industry standards for AI are going to emerge. Government and legislative activity will also nudge organizations to adopt these types of frameworks. So, I’m very optimistic.

How can Cisco continue to expand its leadership role around AI?

AI technology moves very, very fast. So, we’ve designed the framework to be adaptable, and we will continue to iterate on it. Second, we are engaging with our peers in the industry and looking forward to contributing to emerging standards in this space. We are also engaging with our government affairs team to understand how governments are thinking about this. And finally, we are investing in research, working in concert with universities, because we want to further the state of the art.

How can AI, if used ethically, help us realize Cisco’s core goal of powering an inclusive future for all?

I’m very proud of our company’s mission. And I think it is a great, overarching message for us to model our initiatives on. We see how our technologies, like Webex with AI-powered features such as real-time translation, are helping connect the world today and ensuring that everyone can participate in the global economy, regardless of where they live, what language they speak, or what cultural background they come from.

That’s just one example. We believe that our technology, with the use of AI, is really advancing our mission of building an inclusive future for all. I’m proud to be leading a team that is defining the principles that will help us navigate this new AI world.

Used with the permission of https://newsroom.cisco.com