People have long debated what constitutes the ethical use of technology. But with the rise of artificial intelligence, the discussion has intensified, as it’s now algorithms, not humans, that are making decisions about how technology is applied. In June 2020, I had a chance to speak with Paula Goldman, Chief Ethical and Humane Use Officer for Salesforce, about how companies can develop technology, specifically AI, with ethical use and privacy in mind.
I spoke with Goldman during Salesforce’s TrailheaDX 2020 virtual developer conference, but we didn’t have a chance to air the interview then. I’m glad to bring it to you now, as the discussion about ethics and technology has only grown more urgent as companies and governments around the world use new technologies to address the COVID-19 pandemic. The following is a transcript of the interview, edited for readability.
SEE: Managing AI and ML in the enterprise 2020: Tech leaders increase project development and implementation (TechRepublic Premium)
Bill Detwiler: So let’s get right to it. As companies and governments develop apps for contact tracing, to help people know who they’ve been in contact with and where it’s safe, or safer, to go, and as people worry about their data being used appropriately by these apps, what do companies, and hopefully the governments that will be using these apps, need to consider when they’re building them from the get-go? How do they assure people that, look, we’re going to use this data in the right way and not for the wrong reasons, either now or maybe in the future?
Paula Goldman: Well, these are all the right questions to be asking, Bill. And it’s a very strange time in the world, right? There are multiple intersecting crises. There’s a pandemic, there’s a racial justice crisis, there’s so much going on, and technology can play a really important role, but it has to be done responsibly. And we really believe that when you design technology responsibly, it’s going to have better adoption. It’s going to be more effective. And that’s part of why we started the Office of Ethical and Humane Use at Salesforce back in 2018. We knew that companies have a responsibility to think through the consequences of their technology in the world.
And we started a program that we call Ethics by Design, which is precisely for this purpose. The idea was to think through the unintended consequences of a product and to work across our technology teams to make sure that we’re maximizing the positive impact. And so we did not predict COVID-19, I will say, but I think when the moment hit, we were very well prepared to operationalize with our product teams and make sure that what we’re designing is responsible and that we’re empowering our customers, when they collect data, to do so in the most responsible way. Does it make sense for me to go into that?
SEE: How to create a privacy policy that protects your company and your customers (TechRepublic)
Principles for ethical product development
Bill Detwiler: Completely. Yeah. I mean, I think when you and I spoke at Dreamforce last year, it seems like a lifetime ago, but it was just last fall, we were talking about AI, about building ethics into AI, and about some of the unintended consequences that systems like that can have. So I’d love to hear what’s happened since then that set you up to be able to address this moment. What have you learned, not just since 2018 but since Dreamforce last fall, that made it easier for you to have that ethical framework now, to have already been working with the other teams that are building these apps and with the customers that are using them, to help them do so in an ethical way?
Paula Goldman: Yeah, absolutely. Well, when we last talked, we were talking about all of these different methods that we were working on with our technology teams and our product teams. So we were essentially looking at a risk framework: as you’re building a product, what are the areas of risk you should look out for most and screen for? We talked about training for our teams, and we talked about designing features with ethics top of mind.
Well, it turns out all of these things are especially important during a crisis, because COVID-19 is a very unusual circumstance. Think about technology. Technology has helped businesses stay afloat in an economic crisis. You mentioned contact tracing. That has previously been a pencil-and-paper exercise; technology helps public health officials speed up that process so that they’re able, in some cases, to save lives. So technology can play a really, really important role, but on the other hand, the data is very sensitive. And so these apps need to be designed very thoughtfully.
And that’s why when this crisis hit, our team worked in partnership. The Office of Ethical and Humane Use worked in partnership with the privacy team and came up with a joint set of principles. We released them internally in April and externally in May, and then operationalized them across all of our products. And basically it was: even when no one has all the answers, most of us have never lived through a pandemic, and certainly have never lived through a pandemic with access to the type of technology that we have now, even with all of that uncertainty, how can we make sure that the technology we’re developing is trusted and will be effective? So that’s what we did. And I’m happy to go through some of the principles.
SEE: AI and ethics: The debate that needs to be had
Bill Detwiler: Yeah, I’d love to hear the principles.
Paula Goldman: Yeah, absolutely. So there are five principles, and I’m happy to go through any or all of them, but at Salesforce we always rank-order our lists by priority, and the top one, the most important, is human rights and equality. Equality, as you probably know, is a core value for Salesforce. And especially when you think about the COVID-19 crisis disproportionately affecting communities of color and already marginalized populations, we really want to make sure that the solutions we develop are developed inclusively, with and for those populations, and with them top of mind.
And so we’re actively involving diverse experts, medical professionals, health professionals, our Ethical Use Advisory Council, and external communities. And we’re looking out for ways in which products could be unintentionally misused. So let me give you an example of a safeguard that we built into Work.com. Work.com, which we released recently, has a feature that allows employers to schedule shifts for people coming back to the office. And this is a tricky challenge that most employers haven’t had to face before. Employees need to have space between them. You need to monitor the capacity on a floor. And so there’s a tool in there called Shift Manager, and the Shift Manager allows a person to schedule employees for shifts. Sounds simple.
One of the things that we very intentionally did in that product is we made sure that all of the potential employees who could come back for shifts were being treated equally in the technology itself, by default. You don’t want bias creeping into the way that some employees get selected, and others not, to come back for shifts. You want everyone to be treated equally. It’s stuff like that that we’ve worked with our product teams to be very thoughtful and intentional about when we’re designing.
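To make that design choice concrete, here is a minimal, hypothetical sketch, not Salesforce’s actual Shift Manager code; the function and data are invented for illustration. It shows what an equal-treatment default for proposing a shift roster could look like: every eligible employee gets the same chance of being offered a slot, rather than the tool ranking people on attributes that could encode bias.

```python
import random

def propose_shift_roster(eligible_employees, open_slots, seed=None):
    """Hypothetical equal-treatment default for filling open shift slots.

    Every eligible employee has the same probability of being offered a
    slot; the tool does not rank people by tenure, role, or any other
    attribute that could encode bias. A manager can still review and
    adjust the proposal afterward.
    """
    rng = random.Random(seed)
    pool = list(eligible_employees)
    rng.shuffle(pool)          # uniform random order over eligible employees
    return pool[:open_slots]   # offer slots to the first N in that order

# Example: three open slots; everyone has an equal chance of being proposed.
print(propose_shift_roster(["Ana", "Bo", "Chen", "Dia", "Eli"], open_slots=3))
```

The point of the sketch is the default: equal treatment is the behavior you get without any extra configuration, which is the kind of safeguard Goldman describes.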
SEE: Artificial intelligence ethics policy (TechRepublic)
No one will use technology they don’t trust
Bill Detwiler: And you know, you hit on something I think is really important there, which is the sensitivity of the information we’re often dealing with: health information, location information, contact information, who you’re with. And a lot of the people I talk to have a natural skepticism and a hesitation to share that information if they think it could be misused or exposed. And we’ve heard reports of contact tracers in some states not getting accurate or full information from the people they’re having a conversation with, because it is such sensitive information. So do you think that having an ethical framework helps build trust, and gives people the comfort they need to provide this information, by saying we’re not going to use it in an inappropriate way?
Some would say ethics is a nice-to-have: It’s good to have ethics, but we’re more concerned with collecting as much data as we can, building quickly, being on the cutting edge of innovation, and getting things out the door; we’ll worry about the ethics later. But when you’re using technology to solve a problem like this, you need buy-in and participation, because there’s no way to get this data without asking people and without people trusting you. I mean, you can try to force it on people, but for a public health event like this, it doesn’t seem like that would work very well. So how does ethics play into easing people’s fears that the information could be misused, and reassuring them that the systems they’re using are safe, accurate, and will benefit them in the long run?
Paula Goldman: Well, I think you said it perfectly, Bill, and the way I would summarize it is that it doesn’t matter how good a piece of technology is. No one will use it unless they trust it. And particularly now, particularly when we’re talking about health data, these are very sensitive topics. So, two of our other principles that we’ve operationalized in our products: One is about honoring transparency. For example, if our customers are collecting data, we want to help them share how and why that data is being collected and how it’s being used, and make it as easy as possible to explain that transparently. Take the features in Work.com: Shift Management, which we discussed, and Wellness Check, with which an employer can give employees a survey just to make sure that they’re not experiencing COVID symptoms before they come back into the office.
But those come with out-of-the-box email templates that help customers communicate very transparently with users and employees: Why are they collecting this data? What is it being used for, and what is it not being used for? Who has access to it, and who doesn’t? We have to go out of our way in a crisis to be transparent and explain why actions are being taken. And I think we’ve tried to templatize that for our own customers and make it as easy as possible.
Another one that’s really important here is about minimizing data collection. Again, as you were saying, especially now, and especially in a crisis, the principle has to be to collect only the data that is absolutely essential for a solution to be effective, and to make sure that when you’re doing it, you’re safeguarding the privacy of the individual who is consenting to give that data. So another example here in Work.com: For the wellness status, when employees are asked a survey about their health, the shift managers who are scheduling don’t get to see the specific wellness status. It’s either they’re ready to come back to work or they’re not, because there’s no need for a shift manager to associate specific symptoms with an individual.
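As an illustration only, here is a hypothetical sketch of that data-minimization pattern; the types and fields are invented, not the actual Work.com data model. The detailed survey answers stay with the wellness service, and the scheduling side only ever sees a single readiness flag.

```python
from dataclasses import dataclass

@dataclass
class WellnessSurvey:
    """Hypothetical wellness survey; the detailed answers stay private."""
    has_fever: bool
    has_cough: bool
    recent_exposure: bool

def readiness_flag(survey: WellnessSurvey) -> bool:
    """Collapse the survey into the single yes/no a shift manager would see.

    The manager never learns which answer, if any, was positive, only
    whether the employee is ready to return to the workplace.
    """
    return not (survey.has_fever or survey.has_cough or survey.recent_exposure)

# The scheduling view receives only the boolean, never the survey itself.
survey = WellnessSurvey(has_fever=False, has_cough=False, recent_exposure=True)
print(readiness_flag(survey))  # False -> "not ready", with no symptom details
```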
And so these are all the seemingly small design choices that add up to products that people can trust. And I think it’s those types of choices at all levels that matter so much right now.
SEE: How to implement AI and machine learning (ZDNet special report) | Download the report as a PDF (TechRepublic)
Bill Detwiler: I think that’s a key point, too. You mentioned Shift Manager, and the two examples you gave are clear ways that you can design systems so they don’t enable what could be discriminatory behavior or release data inappropriately. How do you develop those design choices? What is the process by which you decide that the individual line managers shouldn’t have this data? If you aren’t consciously thinking about that, then you would probably just say, well, you have this field on a form, this person says, “Nope, I’m well,” and so everyone up the chain should just be able to see all the information on the form.
So someone had to think, no, we don’t want to do that. Describe the process internally for how you go about doing that, because I think any company that’s building apps or designing processes in house would benefit from having someone there to look for those points where unintentional bias could creep into the system.
Paula Goldman: Absolutely. Yeah. And we’ve released these principles publicly. They’re in our newsroom and on the Ethical and Humane Use website for Salesforce, so anybody that’s interested, I’d really encourage you to check them out. And there are decks that explain some of the thinking that we went through, but really it’s about education. So we released the principles and then we had very detailed workshops with each of the product teams that were working on different features in Work.com, and went through everything with a fine-tooth comb.
So for example, the wellness survey: The first layer was, do we really need to be asking all of these questions? So eliminate the questions that don’t need to be in there. That’s step one. And then step two is, can we aggregate the questions so it’s a single yes or no? Because again, you don’t need an administrator associating symptoms with a specific employee. It’s those types of detailed processes that really have made it work.
But what I’ll say is you also want to create a culture where people are asking these questions, and you want to be empowering everyone to be asking the right questions, because when you’re thoughtful like that, that’s what results in the products that get the most adoption, and most importantly, the products that are going to help save lives. In a crisis like this, trust is so paramount.
Incorporating ethics into the development process
Bill Detwiler: Another thing I wanted to touch on and talk to you about is the way you’re building safeguards into the systems that your customers are going to use. I’ve talked to people before about AI, and I remember I was out at TDX last year, and we were talking about Einstein AI and using it in some financial systems. That was kind of a new thing at that point in time. And we were talking about how these were tools that companies could use to make sure that their use of Einstein AI was ethical, or followed principles, and didn’t allow unintentional bias. At that point we were talking about financial systems, so my question was, well, how are we going to prevent an unintended use of the technology by a customer to, say, perpetuate discrimination; at that time we were talking about redlining.
And it was like, okay, it wasn’t the intent of the system to be used this way, but once it gets out in the open, it could be. And I think your example with Work.com and the Shift Manager, the first one we were talking about, is where you’re saying, okay, we wanted to build safeguards so that you couldn’t play favorites when you’re scheduling shifts, or either intentionally or unintentionally use that system to discriminate against certain populations within your workforce. So how do you do that? You can say this is our philosophy, we want to make sure that our tools can’t be used in this way, but how do you operationalize that? Talk a little bit about the operationalization that makes that happen.
Paula Goldman: I guess I’d summarize it as three things. One is what we decide to build in the first place. We are pretty intentional about the products we choose to build and their overall use cases, and we choose to build products that we think are going to improve the world. That’s a very basic level, but it’s a real starting point. Second, we also have policies that we set around how customers use our products, and I encourage people to look at them; they’re publicly available. We have something called an acceptable use policy. You can just Google it: Salesforce acceptable use policy. You were talking about AI; you can check out the Einstein section of the policy, and there are some really interesting, very forward-leaning pieces of it.
For example, there’s a policy that says, if you’re using an Einstein bot, you can’t use it in such a way that it deceives the user into thinking they’re interacting with a human being. There are a lot of very thoughtful, intentionally designed policies like that. But I would say the most important thing is the kind of examples that we were discussing: We intentionally design our products so that it’s as easy as possible to do the right thing, and doing the wrong thing is extremely hard, if not impossible. Those are the types of examples that we were going through, and it’s an exercise that we do with rigor across all of our products.
And I want to be humble here. We’re still learning. Nobody has the magic wand for responsible technology, and we’re continuing to learn. We’re continuing to work in partnership with our customers and our community to keep improving what we’re doing here, but we’re also very proud of where we are and eager to keep sharing and growing.
Bill Detwiler: Do you ever get pushback, whether from other groups inside Salesforce or from customers and other companies outside of it, along the lines of, “Well, yeah, we like this feature, but we want to customize it this way,” or, “We like what you’re trying to do, but it’s getting in the way of innovation or progress,” or, “Yes, we really need that”? And if so, how do you address it? What’s your argument to skeptics that an ethical framework, when you’re designing applications, products, or processes, is not a nice-to-have but really is essential?
Paula Goldman: I think we’ve been very lucky. Salesforce has been a values-driven company from the get-go. Leadership has always been aligned with this philosophy of making the right long-term decisions about what we do and where we engage. And I think in part that’s what draws customers to us, and it’s part of our relationship, our promise, with our own community. So I wouldn’t say there’s been a lot of pushback in that regard about the responsible design of technology, but it does require taking a long-term approach. It does require believing that trust is our most important value, which it is, and that this all accrues to trust. We’ve seen, unfortunately, in the news over the last few years, the so-called techlash: When trust is broken, it’s very hard to repair. And that is why we pay so much attention to these issues, and it’s why we go out of our way to take a listening approach as well, so that we keep growing and learning and keep doing a better and better job at it.