Kaspersky calls for responsible AI application, setting out principles for the ethical development and use of AI in cybersecurity

Kaspersky today presented its ethical principles for the development and use of systems employing artificial intelligence (AI) or machine learning (ML), reinforcing its commitment to a transparent and responsible approach to technology development. As AI algorithms play an increasingly prominent role in cybersecurity, the principles set out in Kaspersky’s whitepaper explain how the company ensures its AI-driven technologies are reliable, and offer guidance to other industry players on mitigating the risks associated with the use of AI/ML algorithms. Kaspersky initiated the discussion as part of the UN Internet Governance Forum, currently taking place in Japan, which brings together the world’s leading experts on internet governance.

Kaspersky has been using ML algorithms, a subset of AI, in its solutions for close to 20 years. Combining the power of artificial intelligence with human expertise has enabled Kaspersky solutions to effectively detect and counter a variety of new threats every day, with ML playing an important role in automating threat detection and anomaly recognition and in enhancing the accuracy of malware identification. To help drive innovation, Kaspersky has formulated ethical principles for the development and use of AI/ML and is sharing them openly with the industry to build impetus for a multilateral dialogue and ensure AI is used to make the world a better place.

According to Kaspersky, the development and use of AI/ML should be guided by the following six principles:

  • Transparency;
  • Safety;
  • Human control;
  • Privacy;
  • Commitment to cybersecurity purposes;
  • Openness to a dialogue.

The transparency principle reflects Kaspersky’s firm belief that companies should inform their customers about the use of AI/ML technologies in their products and services. Kaspersky complies with this principle by developing AI/ML systems that are interpretable to the maximum extent possible and by sharing information with its stakeholders about how its solutions operate and use AI/ML technologies.

Safety considerations are reflected in the range of rigorous measures Kaspersky implements to ensure the quality of its AI/ML systems. These include security audits specific to AI/ML, steps to minimize dependence on third-party datasets when training AI-driven solutions, and a preference for cloud-based ML technologies with the necessary safeguards over models installed on clients’ machines.

The importance of human control stems from the need to calibrate the work of AI/ML systems in the analysis of complex threats, in particular Advanced Persistent Threats (APTs). To provide effective protection against ever-evolving threats, Kaspersky is committed to maintaining human control as an essential element of all its AI/ML systems.

Another crucial principle is respecting the right to privacy in the ethical use of AI/ML. With big data playing a vital role in training such systems, companies working with AI/ML must take the privacy of individuals into account comprehensively. Committed to respecting individuals’ right to privacy, Kaspersky applies a range of technical and organizational measures to protect data and systems and to ensure that its users’ privacy rights are meaningfully exercised.

The fifth ethical principle reflects Kaspersky’s commitment to using AI/ML systems solely for defensive purposes. By focusing exclusively on defensive technologies, the company pursues its mission of building a safer world and demonstrates its commitment to protecting users and their data.

Finally, the last principle refers to Kaspersky’s openness to dialogue with all stakeholders in order to share best practice in the ethical use of AI. Kaspersky stands ready for discussions with all interested parties, as the company’s stance is that only through ongoing collaboration among all stakeholders can we overcome obstacles, drive innovation and open new horizons.

Kaspersky CTO Anton Ivanov commented: “Artificial intelligence has the potential to bring many benefits to the cybersecurity industry, further enhancing the cyber resilience of our society. But, as with any other technology at an early stage of its development, artificial intelligence isn’t risk-free. To address concerns surrounding AI, Kaspersky has released its ethical principles to share its own best practice on AI application and to call for an open, industry-wide dialogue to develop clear guidelines on what considerations the development of AI- and ML-driven solutions should take into account to be deemed ethical.”

Kaspersky presented its ethical principles at the UN-led Internet Governance Forum, taking place in Kyoto, Japan, from October 8-12. With AI and emerging technologies among the key topics at this year’s event, Kaspersky organized a workshop to discuss the ethical principles of AI development and use, bringing both technical and legal considerations to the discussion.

The release of the ethical principles is a continuation of Kaspersky’s Global Transparency Initiative, promoting the principles of transparency and accountability among technology providers for the sake of a more resilient and cybersafe world. To learn more about the initiative and the company’s transparency principles, request a visit to a Kaspersky Transparency Center.
