“Conversations” report by Capgemini Research Institute

Capgemini’s “Conversations” report highlights the influence industry leaders have on implementing ethical AI practices and regulations

Special edition from the Capgemini Research Institute includes views from leaders across industries on:

  • Why organizations need to take a human-centered approach to building ethical and transparent artificial intelligence (AI) systems
  • Why principles, standards, and regulations are important in creating digital ethics
  • How organizations can leverage the power of ethical and transparent AI for successful business transformation and to outpace competition

The Capgemini Research Institute today announced the release of “Conversations”, a special report offering critical insights from industry leaders on a range of ethical questions that the proliferation of artificial intelligence (AI) has unleashed. The report includes a diverse range of perspectives on tackling the issues of ethics and transparency in AI, and the role of guidelines and regulations in this space.

This publication draws on a recently released global survey by the Capgemini Research Institute, “Why addressing ethical questions in AI will benefit organizations”. It probes the issues organizations face today as the quickening pace of technological advancement outstrips current ethical frameworks, and provides recommendations from experts on addressing these challenges.

“AI is set to radically change the way organizations manage their businesses, and is a revolutionary technology that will change the world we live in. The interviews with leaders and practitioners for this new report emphasized its far-reaching implications, and the need to infuse ethics into the design of AI algorithms. They also placed immense importance on making AI transparent and understandable in order to build greater trust,” said Jerome Buvat, Global Head of the Capgemini Research Institute.

In a world where AI is expected to become so advanced that it makes certain business decisions, the report’s interviews highlight the importance of not abdicating responsibility, especially when a decision raises ethical concerns. Taking a human-centered approach to building ethical and transparent AI is also a key consideration.

Saskia Steinacker, Global Head of Digital Transformation at Bayer, has played a key role in developing the company’s digital agenda with a focus on new business models to accelerate growth. She notes, “Our goal in healthcare is not to let AI take decisions, but to help doctors make better decisions. AI has its strengths – analyzing huge amounts of data and generating insights that a human being wouldn’t have thought of before. It is able to identify certain patterns, for example in radiological images, and supports a doctor’s diagnosis. AI is meant to enhance or augment the capabilities of humans.”

While guidelines and regulations give society reassurance and increase consumer trust in new technologies, the report highlights that there is a need to balance legislation with self-regulation to avoid stifling innovation. Paul Cobban, Chief Data and Transformation Officer at DBS, a multinational banking and financial services group headquartered in Singapore, noted, “Regulations implemented with the right balance, and in the best interest of all the involved parties, are imperative in driving optimum results. Companies have to think about the balance between the rights of the individual and the rights of businesses. The other challenge around regulation is that in an increasingly connected world, regulations in one part of the world differ from those in other parts. Regulators have a duty to collaborate among themselves and have some kind of baseline approach to this.”

Lanny Cohen, Chief Innovation Officer at Capgemini, said, “AI adoption is no longer a choice – it’s a must and will soon be ubiquitous. Implementing AI correctly is instrumental to long-term business planning and sustainable growth. However, AI needs to be applied with an ethical and responsible approach – one that is transparent to users and customers, embeds privacy, ensures fairness, and builds trust. AI implementations should be unbiased and open to disclosure and explanation.”

For this report, the Capgemini Research Institute interviewed a range of experts and practitioners from various industries, including insurance, banking, and pharmaceutical and life sciences; leading academic experts from Harvard, Oxford, and MIT; and the director of the industry association DigitalEurope.

A copy of the report can be downloaded here.