“With AI,” Cisco’s Joseph Bradley said, “we’re at a crossroads for a new kind of moral compass of human equality, at a level literally of the civil rights movement. Because when you think of the number of people that AI can impact and the speed at which it drives decisions, you understand how important it is for us to get it right.”

That importance will only grow, as artificial intelligence and machine learning take over key decisions in everything from enterprises and public safety to battlefields and operating rooms.  

Depending on how it’s developed and deployed, AI can support a future that’s inclusive, sustainable, and rich with opportunity, even for the most disadvantaged in society. Or it can widen the divide, eliminating jobs and basing key decisions on biased programming.

“The Iron Man analogy is my favorite for AI,” added Bradley, who is Cisco’s global vice president of IoT, blockchain, AI, and incubation businesses. “It’s not that AI’s replacing us. But it brings out the best of who we are. When you put on that Iron Man exoskeleton, you’re doing great things. To me, AI is about enhancing our ability to get more from life.”

But as Bradley and others insist, for AI to support that more inclusive future demands wisdom, caution, and collective will, starting with a hard look at who’s programming it and what it’s being used for.

“These are immensely powerful tools that will genuinely change the way work is done and the world operates,” said Vivienne Ming, an AI expert and founder of Socos Labs. “But we have a small army of incredibly smart, relatively young, almost overwhelmingly white male AI experts being churned out by universities that have spent their entire very brief careers learning how to build ever more sophisticated hammers. But they’ve never built a house.” 

Ming insists she’s not engaged in a blame game. “Nobody’s the bad person in this story,” she clarified. But as AI takes over key decisions, she argues that leaders in business, government, and beyond must develop a more critical view of AI.

“Profoundly important problems,” she said, “are being handed over to these young men. Who gets a loan? Who goes to prison? Who gets into the country? Who gets a job? And they’re nice, and they’re smart and they’re earnest, but they don’t know what it means to actually solve problems in the world. They only know how to build a deep neural network.”

A ‘thinking’, ‘feeling’ machine?

Kay Firth-Butterfield, head of artificial intelligence and machine learning for the World Economic Forum, stresses that unconscious bias in AI can undermine even the best intentions. That bias can originate in the data itself. 

“For example, in some criminal sentencing algorithms,” she said, “there have been great biases brought forward from previous criminal history against African-Americans. So what AI does by using that historic data is to simply codify and extend our human biases.”
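Her point can be made concrete with a simple audit. The sketch below is a minimal, hypothetical illustration in Python, not any real sentencing or lending system: it compares a model’s positive-decision rate and false positive rate across groups. When a model is trained on historically biased decisions, gaps in numbers like these are where the codified bias tends to surface. All group names and records are invented.

```python
# Hypothetical bias audit: compare a model's decisions against real outcomes,
# broken down by group. Large gaps in approval/flagging rates or false
# positive rates suggest the model has codified historical bias.
from collections import defaultdict

def audit_by_group(records):
    """records: iterable of dicts with 'group', 'predicted', 'actual' (bools)."""
    stats = defaultdict(lambda: {"n": 0, "pred_pos": 0, "fp": 0, "neg": 0})
    for r in records:
        s = stats[r["group"]]
        s["n"] += 1
        s["pred_pos"] += r["predicted"]          # how often the model says "yes"/"flag"
        if not r["actual"]:                      # look only at actual negatives
            s["neg"] += 1
            s["fp"] += r["predicted"]            # flagged despite a negative outcome
    return {
        group: {
            "positive_rate": s["pred_pos"] / s["n"],
            "false_positive_rate": s["fp"] / s["neg"] if s["neg"] else float("nan"),
        }
        for group, s in stats.items()
    }

# Invented predictions from a model trained on historical decisions.
sample = [
    {"group": "A", "predicted": True,  "actual": False},
    {"group": "A", "predicted": True,  "actual": True},
    {"group": "A", "predicted": False, "actual": False},
    {"group": "B", "predicted": False, "actual": False},
    {"group": "B", "predicted": True,  "actual": True},
    {"group": "B", "predicted": False, "actual": False},
]

for group, metrics in audit_by_group(sample).items():
    print(group, metrics)
```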

In the enterprise, AI is already having an impact in HR. And depending on how it is adopted, it can either help or hinder inclusion and diversity. 

“In hiring decisions,” Firth-Butterfield said, “AI is being used to observe people during the hiring process, so that it can be used to make judgements. That’s different from emotional intelligence. What it’s doing is understanding what we are thinking and therefore reacting accordingly.”

But can actual emotional intelligence be programmed into AI? And how can it promote diversity in the enterprise?

Joseph Bradley cites examples where AI’s ability to read emotions can be used positively. 

“There’s applications of AI that are being used to help folks with autism to understand the emotional quotient and what’s happening emotionally around them,” he said.

He’s also encouraged by the machine learning programs Cisco has adopted in its hiring process. 

“Our chief inclusion and diversity officer did a great job investing in diverse talent accelerators, DTAs,” Bradley explained. “They use analytics and machine learning and a bit of data science to help us ensure that we are identifying the best places to find diverse talent.”

It’s just a hint of what AI can do when used in mindful ways. Ming has used AI to help treat bipolar disorder and diabetes, as well as autism. And she’s developed educational programs that combine machine learning and cognitive neuroscience to maximize students’ potential — whatever their situation. 

AI, through the lens of human potential 

Ming’s own personal journey informs her perspective on AI and inclusion. A transgender woman who was once a homeless college dropout, she is passionate about using AI to ensure that great talent isn’t cast aside. 

Technology, she believes, should always be viewed through the lens of human potential: to bring success, meaning, and purpose to people’s lives. For her, that must include the vast numbers who never developed their talents because they suffer from disabilities or were born into American ghettos, Brazilian favelas, or African villages. 

“It’s not simply that I want everyone to be happy or successful,” Ming insisted, “but that the person who would have found the cure for the disease that kills your child never grew up to be that person. That we as a society will never benefit from who they truly could have been. To me, that’s terrifying because for me, it’s so real.”

Ming believes that anytime technology helps create a better person, it’s a gift to the world. And that this goal should inform all tech initiatives. 

“It certainly produces a different perspective on why we found companies,” she said, “or invent technologies, much less the specifics of machine-learning algorithms.”

New skills for new jobs

As all three experts emphasized, AI should augment humans, rather than just replace them. And studies have shown that in the long run, more opportunities will be created than lost. 

“I think that we all agree there will be more jobs,” said Firth-Butterfield, “and there will be new jobs that we can’t think of at the moment.”

To be sure, retraining for jobs that may not exist today is a challenge. And the transition will drive upheavals in society and hardships for many. So how we prepare is crucial.

“There is likely to be a dip between now and those new jobs starting up,” Firth-Butterfield added. “And it’s those people in that dip, maybe in the next two to five years, that we really need to be cognizant to ensure that we help them, because we know what happens when you leave people unemployed and feeling that they have been let down and fired.”

Cisco, for example, has brought its Networking Academy training programs into prisons, halfway houses, and juvenile halls, in an effort to extend tech training to overlooked populations.  

At the same time, in a hyper-technological society, human skills will reign. With machines handling rote tasks and calculations faster than any human brain, creativity, collaboration, and empathy will be highly desired skills. And that has big implications for education and job training. 

“We are moving to a world where all the answers are known,” Bradley said. “Value, therefore, will be found in knowing what questions to ask. You are going to see a huge rise in humanities. You need people to be able to ask the different questions of these systems. This is why we think about diversity of programmers … you need people that can think about the social, racial, and other implications of these things.”

The FAIR way to adopt AI 

To simplify the process of adopting AI ethically, Joseph Bradley has distilled it into a simple-to-remember acronym — FAIR — based around four core questions: 

Is it Fundamentally Sound? — To eliminate bias, you don’t need to be a data scientist, but you do need to understand the data you’re using. Does it adhere to the fundamental principles of data mining? Is your data representative of the population on which you plan to use the model? (A sketch of one such check appears after this list.)

Is it Assessable? — Can you explain it? Do you know why the black box made this particular decision? If you can’t understand it, and you tell yourself it’s too complicated, then that means you probably shouldn’t do it.  

Is it Inclusive? — Can you determine whether the program excludes customers from your products or services on the basis of race, age, gender, or other discriminatory factors?

Is it Reversible? — “Do unto others as you would have them do unto you.” If the situation were reversed and you were in your customers’ shoes, how would you feel about how this AI model is treating you?
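To make the “Fundamentally Sound” question a bit more tangible, here is a minimal, hypothetical sketch of a pre-deployment check: it compares each group’s share of the training data with its share of the population the model will serve, and flags large gaps for review. The group names and the five-point threshold are assumptions for illustration only, not part of Cisco’s FAIR framework or tooling.

```python
# Hypothetical representativeness check: does the training data mirror the
# population the model will actually be used on? Gaps above a chosen
# threshold are flagged for human review before deployment.

def representativeness_gaps(training_counts, population_shares):
    """Return each group's share in the training data minus its share in the target population."""
    total = sum(training_counts.values())
    return {
        group: training_counts.get(group, 0) / total - population_shares[group]
        for group in population_shares
    }

# Invented numbers: rows of training data per group vs. real-world shares.
training_counts = {"group_a": 7200, "group_b": 1900, "group_c": 900}
population_shares = {"group_a": 0.55, "group_b": 0.30, "group_c": 0.15}

for group, gap in representativeness_gaps(training_counts, population_shares).items():
    flag = "REVIEW" if abs(gap) > 0.05 else "ok"   # illustrative threshold
    print(f"{group}: gap {gap:+.2%} [{flag}]")
```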

What’s ethical is what’s smart

In the business world, those companies that harness AI to source, support, and grow talent will win. Those that simply replace workers will fall behind, since the technology will be available to all. In the end, human creativity will distinguish leaders from laggards, because the best products and services will offer an emotional connection with users — even if they’re connecting with a bot. 

“Most companies will need to get on board with the AI revolution,” said Firth-Butterfield. “If they don’t, they could find themselves redundant in their space. But they should do it ethically, and ethics is something that is really core to AI.”

What’s ethical is also what’s smart. And in an AI-dominated world, maximizing human potential is the differentiator. 

“The big scary story of artificial intelligence and the future of work,” Ming said, “is that uncertainty is a fundamental of our future. But the silver lining is that in this world of increasing technology and automation, we need to be all the more human — and support that in your workforce.”

“This will be hard,” she warned. “This will be change. But the people that figure it out will be the ones that come out the other side as the leaders in every single industry.”

On the other side of that journey, Bradley believes, is a better, highly inclusive world, if we make wise choices today.

“The value of AI is, can we drive and connect the full power of the human spirit with every human being?” he said. “Can we extract that ability?” 

“AI is a hugely powerful tool,” Bradley concluded. “We just have to be strong enough and smart enough to recognize that this is so transformational, it’s so important, that let’s make sure that we ask the right questions so that we get it right.”

Used with the permission of http://thenetwork.cisco.com