
Who will your algorithm harm next? Why businesses need to start thinking about evil AI now

From Google’s commitment to never pursue AI applications that might cause harm, to Microsoft’s “AI principles”, to IBM’s defense of fairness and transparency in all algorithmic matters: big tech is promoting a responsible AI agenda, and it seems companies large and small are following their lead.

The statistics speak for themselves. While in 2019 a mere 5% of organizations had come up with an ethics charter framing how AI systems should be developed and used, the proportion jumped to 45% in 2020. Key words such as “human agency”, “governance”, “accountability” or “non-discrimination” are becoming central components of many companies’ AI values. The concept of responsible technology, it would seem, is slowly making its way from the conference room into the boardroom.

This renewed interest in ethics, despite the topic’s complex and often abstract dimensions, has been largely motivated by various pushes from both governments and citizens to regulate the use of algorithms. But according to Steve Mills, leader in machine learning and artificial intelligence at Boston Consulting Group (BCG), there are many ways that responsible AI might actually play out in businesses’ favor, too.

“The last 20 years of research have shown us that companies that embrace corporate purposes and values improve long-term profitability,” Mills tells ZDNet. “Customers want to be associated with brands that have strong values, and this is no different. It’s a real chance to build a relationship of trust with customers.”

The challenge is sizeable. Looking over the past few years, it seems that carefully drafted AI principles have not stopped algorithms from bringing reputational damage to high-profile companies. Facebook’s advertising algorithm, for example, has repeatedly been criticized for its targeting, after it was found that the AI system disproportionately showed ads about credit cards and loans to men, while women were presented with employment and housing ads.

Similarly, Apple and Goldman Sachs recently came under fire after complaints that women were offered lower Apple Card credit limits than men, while a health company’s algorithm that aimed to work out who would benefit most from additional care was found to have favored white patients.

These examples shouldn’t discourage companies that are willing to invest in AI, argues Mills. “A lot of executives view responsible AI as risk mitigation,” he says. “They are motivated by fear of reputational damage. But that’s not the right way to look at it. The right way to look at it is as a big opportunity for brand differentiation, customer loyalty and, ultimately, long-term financial benefits.”

According to recent research by consulting firm Capgemini, close to half of customers report trusting AI-enabled interactions with organizations – an encouraging figure – but they expect those AI systems to explain their decisions clearly, and organizations to be held accountable when the algorithms go wrong.

For Lasana Harris, a researcher in experimental psychology at University College London (UCL), the way a company publicly presents its algorithmic goals and values is key to winning customers’ favor. Being wary of the practices of for-profit companies is the default position for many people, he explained during a recent webinar; and the intrusive potential of AI tools means that businesses should double down on ethics to reassure their customers.

“Most people think that for-profit companies are trying to exploit you, and the common perception of AI tends to stem from that,” says Harris. “People fear that the AI will be used to trawl their data, invade their privacy, or get too close to them.”

“It’s about the goals of the company,” he continues. “If the customer perceives good intentions from the company, then the AI will be seen in a positive light. So, you have to make sure that your company’s goals are aligned with your customers’ interests.”

It’s not only customers that businesses can win over with strong AI values and practices. The past few years have also seen growing awareness among those who create the algorithms in the first place, with software developers voicing concern that they are bearing the brunt of responsibility for unethical technology. If programmers are not entirely convinced that employers will use their inventions responsibly, they might quit. In the worst-case scenario, they might even make a documentary out of it.

There is practically no big tech player that hasn’t experienced some form of developer dissent in the past five years. Google employees, for instance, rebelled against the search giant when it was revealed that the company was providing the Pentagon with object-recognition technology to use in military drones. After some of the protesters decided to quit, Google abandoned the contract.

The same year, a group of Amazon employees wrote to Jeff Bezos asking him to stop the sale of facial-recognition software to the police. More recently, software engineer Seth Vargo pulled one of his personal projects off GitHub after he found that one of the companies using it had signed a contract with US Immigration and Customs Enforcement (ICE).

Programmers don’t want their algorithms to be put to harmful use, and the best talent will be drawn to employers who have set up appropriate safeguards to make sure that their AI systems remain ethical. “Tech workers are very concerned about the ethical implications of the work they’re doing,” says Mills. “Focusing on that issue will be really important if you want, as a company, to attract and retain the digital talent that’s so critical right now.”

Tech ethics, therefore, could go from a “nice to have” to a competitive advantage; and judging by the recent proliferation of ethics charters, most companies get the concept. Unfortunately, drafting press releases and company-wide emails won’t cut it, explains Mills. Bridging the gap between theory and practice is easier said than done.

Ethics, sure – but how?

Capgemini’s research called organizations’ progress in the field of ethics “underwhelming”, marked by patchy implementation. Only half of organizations, for example, have appointed a leader responsible for the ethics of AI systems.

Mills draws a similar conclusion. “We’ve seen that there are a lot of principles in place, but very few changes happening in how AI systems are actually built,” he says. “There is growing awareness, but companies don’t know how to act. It feels like a big, thorny issue, and they kind of know they need to do something, but they’re not sure what.”

Fortunately, there are examples of good practice. Mills recommends following Salesforce, whose efforts can be traced back to 2018, when the company created an AI service for CRM called Einstein. Before the end of the year, the company had defined a series of AI principles, created an office of ethical and humane use, and appointed both a chief ethical and humane use officer and an architect of ethical AI practice.

In fact, one of the first steps for any ethics-aspiring CIO is to hire and empower a fellow leader who will drive responsible AI across the organization, and who is given a seat at the table next to the company’s most senior leaders. “An internal champion such as a chief AI ethics officer should be appointed to act as head of any responsible AI initiative,” Detlef Nauck, the head of AI and data science research at BT Global, tells ZDNet.

Nauck adds that the role requires someone specifically trained in AI ethics, who can work across the business and throughout a product’s lifecycle, anticipating the unintended consequences of AI systems and raising these issues with leadership.

It is also key to make sure that employees understand the organization’s values, for example by communicating ethical principles via mandatory training sessions. “Sessions should train employees on how to uphold the organization’s AI ethical commitments, as well as ask the critical questions needed to spot potential ethical issues, such as whether an AI application might lead to exclusion of groups of people or cause social or environmental harm,” says Nauck.

Training must come with practical tools to test new products throughout their lifecycle. Salesforce, for example, has created a “consequence scanning” tool that asks participants to imagine the unintended outcomes that a new feature they are working on could have, and how to mitigate them.
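The record-keeping behind such an exercise can be very lightweight. As a rough illustration only – the following Python sketch is hypothetical, not Salesforce’s actual tool, and every name in it is invented – a consequence scan might simply collect the outcomes a team brainstorms for a feature and flag any unintended ones that still lack a mitigation plan:

from dataclasses import dataclass, field
from enum import Enum

class Kind(Enum):
    INTENDED = "intended"
    UNINTENDED = "unintended"

@dataclass
class Consequence:
    description: str
    kind: Kind
    affected_groups: list[str]
    mitigation: str | None = None  # how the team plans to act on or monitor the risk

@dataclass
class ConsequenceScan:
    feature: str
    consequences: list[Consequence] = field(default_factory=list)

    def open_risks(self) -> list[Consequence]:
        """Unintended consequences that still lack a mitigation plan."""
        return [c for c in self.consequences
                if c.kind is Kind.UNINTENDED and not c.mitigation]

scan = ConsequenceScan(feature="lookalike audience targeting")
scan.consequences.append(Consequence(
    description="Credit ads shown disproportionately to one gender",
    kind=Kind.UNINTENDED,
    affected_groups=["women"],
))
for risk in scan.open_risks():
    print(f"UNMITIGATED: {risk.description} (affects: {', '.join(risk.affected_groups)})")

Running the script prints a warning for each unmitigated risk, giving the team a concrete to-do list before the feature ships.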

The company also has a dedicated board that gauges, from prototype to production, whether teams are removing bias in training data. According to the company, this is how Einstein’s marketing team was once able to successfully remove biased ad targeting.
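One common statistical check a review board like this might apply is demographic parity: comparing the rate of a positive outcome, such as being shown an ad, across groups. The sketch below uses that generic technique with hypothetical column names – it is not Salesforce’s internal process – to compute the “four-fifths” disparate-impact ratio, a widely used warning threshold:

import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Share of positive outcomes (e.g. 'was shown the ad') per group."""
    return df.groupby(group_col)[outcome_col].mean()

def disparate_impact_ratio(rates: pd.Series) -> float:
    """Lowest group rate divided by highest; below 0.8 is a common warning sign."""
    return rates.min() / rates.max()

# Toy audit data: who was shown a credit ad, broken down by gender.
df = pd.DataFrame({
    "gender": ["m", "m", "m", "f", "f", "f"],
    "shown_credit_ad": [1, 1, 0, 1, 0, 0],
})
rates = selection_rates(df, "gender", "shown_credit_ad")
ratio = disparate_impact_ratio(rates)
print(rates)
if ratio < 0.8:
    print(f"Disparate impact ratio {ratio:.2f} is below 0.8 – review the targeting model")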

Mills mentions similar practices at Boston Consulting Group. The firm has created a simple web-based tool that takes the form of a yes-or-no questionnaire teams can use for any project they are working on. Adapted from BCG’s ethical principles, the tool can help flag risks on an ongoing basis.

“Teams can use the questionnaire from the first stage of the project and all the way towards deployment,” says Mills. “As they go along, the number of questions increases, and it becomes more of a conversation with the team. It gives them an opportunity to step back and think of the implications of their work, and the potential risks.”
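A minimal version of such a stage-gated questionnaire is easy to picture. The sketch below is purely illustrative – the stages and questions are invented, not BCG’s – but it reproduces the accumulating behavior Mills describes, with later stages inheriting all earlier questions:

STAGES = ["scoping", "prototyping", "deployment"]

# Each question applies from a given stage onward; answering "yes" flags a risk.
QUESTIONS = [
    ("scoping", "Could the system exclude or disadvantage a group of people?"),
    ("scoping", "Does the use case involve sensitive personal data?"),
    ("prototyping", "Is the training data unrepresentative of the people affected?"),
    ("prototyping", "Would the model's decisions be hard to explain to a customer?"),
    ("deployment", "Is there no human escalation path when the system gets it wrong?"),
]

def questions_for(stage: str) -> list[str]:
    """Questions accumulate: a later stage includes all earlier stages' questions."""
    cutoff = STAGES.index(stage)
    return [q for s, q in QUESTIONS if STAGES.index(s) <= cutoff]

def flag_risks(stage: str, answers: dict[str, bool]) -> list[str]:
    """Return every question answered 'yes' (True) at this stage."""
    return [q for q in questions_for(stage) if answers.get(q)]

answers = {"Could the system exclude or disadvantage a group of people?": True}
for risk in flag_risks("prototyping", answers):
    print("RISK FLAGGED:", risk)

Because a “yes” at any stage surfaces immediately, the conversation Mills describes can happen while there is still time to change course, rather than at a final sign-off.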

Ultimately, then, ethics is about instilling a state of mind among teams; it doesn’t require sophisticated technology or expensive tools. At the same time, the concept of responsible AI isn’t going anywhere; if anything, it is only likely to become more of a priority. Giving the topic some thought now, therefore, might end up being key to staying ahead of the competition.
