
How The Trevor Project uses AI to help LGBTQ+ youth and train its counselors

The Trevor Project, a nonprofit organization focused on ending suicide among LGBTQ+ youth, is using artificial intelligence to better meet its mission. Over the last couple of years, the organization has used natural language processing (NLP) to expedite and improve its volunteer training and to triage incoming calls. The group’s AI team has dealt with common language issues in unique ways and is currently working to significantly expand its volunteer training capabilities and offer its services to more youth around the world. Throughout, the organization has developed strategies to carefully ensure fairness, safety, and accountability in its AI models and their applications.

Founded in 1998, The Trevor Project operates 24/7, connecting young members of the U.S. LGBTQ+ community with trained counselors over the phone, through text messages, and via instant messages. The organization also hosts Trevorspace, a monitored social network for LGBTQ+ youth from around the world where they can meet friends with similar interests and find answers to tough questions. Recent estimates suggest that over 1.8 million LGBTQ+ youth in the United States between the ages of 13 and 24 consider suicide each year.

Beginning the AI journey

According to VP of technology John Callery, The Trevor Project has spent the past three years rebuilding its infrastructure for crisis services. During this process, team members began to notice needs that traditional software engineering and decision trees couldn’t meet. “Our ability to prioritize users with the highest risk of suicide coming into our services was really challenging, and we couldn’t really do that with keywords in a way that was actually effective and meaningful,” he said. “So, this kind of got us to really think about how […] we actually go about solving these problems.”

Last year, with this idea in mind, The Trevor Project applied for Google’s AI Impact Challenge and was selected as one of 20 finalists from 2,602 applications. Google granted The Trevor Project $1.5 million and a team of Google Fellows to help the organization problem-solve with AI. “And from there, Google kind of flipped up the heat on how to set goals, how to approach responsible AI, [and] how to productionize AI,” said Callery.

The Trevor Project is particularly concerned with responsibility and intersectionality: Because every minute matters in crisis prevention and the young people it supports each face unique, difficult challenges, each interaction from the organization must be both prompt and sensitive. Its triage model is specifically designed to optimize these interactions.

When a young person in crisis enters TrevorChat, for example, they’re asked to volunteer any details, in a few words or sentences, about how they’re feeling. They’re also asked to select how upset they are from a drop-down menu of five options, whether they have thoughts of suicide, and whether they have attempted suicide before. The intake form also provides an opportunity for the user to share their sexual orientation, gender identity, and ethnicity if they wish. While the two questions related to suicide are mandatory, the rest are optional. Within a few minutes, the user is then connected with a trained counselor. (TrevorText’s questions over confidential, 1:1 text messaging are similar, but they don’t ask for sexual, gender, or ethnic identification.)
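The Trevor Project hasn’t published its intake schema, but a minimal sketch of what such a record might look like, with purely illustrative field names, is below:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of a TrevorChat intake record. Field names and types
# are illustrative; this is not The Trevor Project's actual schema.
@dataclass
class IntakeRecord:
    free_text: str                       # how the young person says they're feeling
    upset_level: int                     # 1-5, from the five-option drop-down
    thinking_of_suicide: bool            # mandatory question
    attempted_suicide_before: bool       # mandatory question
    sexual_orientation: Optional[str] = None  # optional self-identification
    gender_identity: Optional[str] = None     # optional
    ethnicity: Optional[str] = None           # optional
```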

Based on clients’ intake responses, The Trevor Project uses an ALBERT natural language processing model to classify and predict a young person’s clinical suicide risk level in the moments before they connect to a counselor and triages cases accordingly. The diversity of the people who seek out The Trevor Project’s help, particularly when it comes to the intersectionality of people’s identities and demographics, presents a challenge. “Part of that is making sure that we’re giving equal access to examples that come from all of the backgrounds that the young people who deserve the Trevor Project’s care come from,” said Daniel Fichter, head of AI and engineering at The Trevor Project.
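The organization hasn’t detailed its production setup, but a minimal sketch of how an ALBERT classifier could score intake text, using the open source transformers library with the public albert-base-v2 checkpoint standing in for the group’s fine-tuned model, might look like this:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumption: a two-class (standard vs. high risk) setup. The public
# albert-base-v2 weights are a stand-in; the classification head is
# untrained until fine-tuned on labeled intake data.
tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
model = AutoModelForSequenceClassification.from_pretrained(
    "albert-base-v2", num_labels=2
)
model.eval()

def triage_score(intake_text: str) -> float:
    """Return a probability-like score that an intake message is high risk."""
    inputs = tokenizer(intake_text, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()
```

Higher-scoring chats would then be moved toward the front of the counselor queue.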

Ethical AI

Given The Trevor Project’s vulnerable clientele, it was critically important for the organization to ensure fairness in its AI. “In many instances, that includes the reduction of systemic racial and gender bias, and for organizations like The Trevor Project, it also means saving more lives than we ever thought possible,” said Kendra Gaunt, The Trevor Project’s data and AI product owner, in materials sent to VentureBeat. Gaunt joined The Trevor Project to work with the group’s AI and engineering team alongside Google’s Fellows.

“For some organizations using AI, you know, fairness comes up in the context of avoiding bad outcomes,” said Callery. “And I think just because of the work we’re doing, [and] because of the reason why we’re using technology in the first place, and the mission that we’re trying to accomplish with it, AI is a positive thing, and fairness — we have to think of [it] foremost in a constructive way, a positive way. It’s a question of making sure that what we’re building serves the young people of all demographic backgrounds, and all intersectional backgrounds, really well.”

Gaunt said that The Trevor Project spent months working to develop guiding principles and AI models that avoid reinforcing biases that impact people based on factors like their race or intersectionality, and she broke down the steps into five distinct parts: mitigate data bias, protect privacy, consider the diversity of end users, rely on domain expertise, and evaluate fairness.

She offered pragmatic advice for those seeking to reduce the bias in their data. “Define the problem and goals up front. Doing so in advance will inform the model’s training formula and can help your system stay as objective as possible,” she said. “Without predetermined problems and goals, your training formula could unintentionally be optimized to produce irrelevant results.”

Respecting privacy involves obvious measures like adhering to the terms and conditions and privacy policies of any data sets you use. But Gaunt also said that it’s important to keep in mind the project’s objectives so you’re aligned with acceptable uses of the data, and she advocated for using differential privacy techniques to remove personally identifiable information (PII). “Teams can also rehearse privacy attacks and perform automated system tests to prevent unwanted behavior and secure private information further,” she added.
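Differential privacy is, strictly speaking, about bounding what a trained model can reveal about any one person rather than scrubbing text directly, but simple redaction is a common complementary first step. A minimal, hypothetical redaction pass (patterns and placeholder tokens are illustrative only):

```python
import re

# Illustrative redaction pass, not The Trevor Project's pipeline. In
# practice this would run before text is stored or used for training,
# alongside the differential-privacy techniques and privacy-attack
# rehearsals described above.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b(?:\+?1[-. ]?)?\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}\b"),
}

def scrub_pii(text: str) -> str:
    """Replace obvious identifiers with placeholder tokens."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(scrub_pii("text me at 555-867-5309 or kid@example.com"))
# -> "text me at [PHONE] or [EMAIL]"
```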

The Trevor Project is somewhat unique in that its end users are a very specific group of people, but that narrow group is enormously diverse, comprising myriad experiences, backgrounds, and intersectional identities. To best understand how to serve that group, the organization relied on domain experts — that is, people who understand a given area or population best, even (or especially) if they aren’t tech people.

“It’s important to consult with domain experts and keep abreast of current research to account for the community in which the model will exist and to plan for how humans might respond to interacting with your models,” Gaunt said. “Leaning into individuals that comprise a community for direct feedback also supports building an inclusive model.” She said that The Trevor Project’s tech team worked closely with the Crisis Services team to make sure they adhered to “clinical best practices regarding crisis intervention and suicide prevention.”

“Without that necessary expertise, our implementation of AI could prove counterproductive and cause unintentional harm to the LGBTQ youth we serve,” said Gaunt.

A key final step is evaluating the fairness of an AI model’s output to abate potential negative impacts on the people it’s meant to help. “In addition to defining success metrics that measure system performance from a statistical viewpoint […] and user experience […], models must be measured on fairness throughout development, upon go-live, and beyond,” said Gaunt.

She said that every organization needs to find the best methods (there are many available) for measuring fairness for their particular use cases and demographics. “At The Trevor Project, these measures include but are not limited to: sexual orientation, gender identity, race, and ethnicity, and the intersection of those identities,” she said.

“By establishing monitor and alert systems to track the model’s performance over time, teams can verify that AI tools will work effectively for anyone who uses them. It can also be helpful to host regular working sessions where experts from other teams can review AI performance [and] fairness, evaluate the model for biases, and course-correct as needed,” she added.
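Gaunt doesn’t specify the group’s exact metrics, but one common pattern matching her description is to compute a headline metric, such as recall, separately for each self-reported identity group and alert when any group lags. A hypothetical sketch (group labels and the gap threshold are illustrative):

```python
from collections import defaultdict

def recall_by_group(records):
    """records: iterable of (group, y_true, y_pred) with binary labels.

    Recall is a natural headline metric here, since missing a
    high-risk chat is the costliest error.
    """
    hits, totals = defaultdict(int), defaultdict(int)
    for group, y_true, y_pred in records:
        if y_true == 1:
            totals[group] += 1
            hits[group] += int(y_pred == 1)
    return {g: hits[g] / totals[g] for g in totals}

def fairness_alert(recalls, max_gap=0.05):
    """Flag groups whose recall trails the best group by more than max_gap."""
    best = max(recalls.values())
    return {g: r for g, r in recalls.items() if best - r > max_gap}
```

A monitoring job could run checks like these on fresh labeled traffic and trigger the cross-team review sessions Gaunt describes whenever a gap appears.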

Accelerating training

Before they can help kids in crisis, The Trevor Project’s volunteer counselors need training. Because of the time and resources that training requires, this has long been a bottleneck to the group’s mission.

Counselors learn how to speak with and understand young people who represent vastly different communities. One way is through familiarizing themselves with the precise ways that members of Generation Z express themselves with emojis, non-traditional punctuation, and all uppercase or all lowercase letters. This knowledge allows counselors to then better connect with and care for young people in their most difficult moments. But now, The Trevor Project is using generative language models to significantly accelerate new volunteer training in a tool called Conversation Simulator. “A really core formative part of helping a counselor get ready for their first interaction with young people on a shift is having some of those ‘real’ experiences,” said Fichter. The AI models are trained on the same unique linguistic quirks and nuances, which gives the trainees a more realistic experience when they practice against an AI.
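The Trevor Project hasn’t published Conversation Simulator’s architecture, but the basic pattern, a generative model prompted to stay in character as a youth persona, can be sketched with the transformers library; GPT-2, the persona, and the sampling settings below are all stand-ins:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Stand-in sketch: GPT-2 substitutes for whatever generative model powers
# Conversation Simulator; the persona name and parameters are illustrative.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

PERSONA = ("The following is a practice chat for counselor training. "
           "'Riley' is a teen texting a crisis line, writing in lowercase "
           "with emoji and slang.\n")

def persona_reply(history: str) -> str:
    """Generate the simulated youth's next message given the chat so far."""
    prompt = PERSONA + history + "\nRiley:"
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    out = model.generate(
        ids,
        max_new_tokens=40,
        do_sample=True,          # sampling keeps replies varied between trainees
        top_p=0.9,
        pad_token_id=tokenizer.eos_token_id,
    )
    return tokenizer.decode(out[0, ids.shape[1]:], skip_special_tokens=True)
```

Training such a model on text written in Generation Z’s register is what gives trainees the “real” feel Fichter describes.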

Fichter said that another benefit of using AI for training is that counselors can go through the emotional experience of crisis communication on their own, without the gravity of having a real person on the other end of the line. “The AI can really sort of become a window into hearts that are struggling with feelings, like those inside the real people that a counselor is eventually going to have a heart-to-heart with,” said Fichter.

These volunteer counselors are rapidly growing in number. Two years ago, The Trevor Project could train about 30 counselors each quarter. Now, with online, asynchronous programming, the group says it’s been able to train over 100 counselors each month. Counselors can learn on their own time, and, if they desire, at an accelerated pace.

More intelligent next steps

AI has catalyzed The Trevor Project’s growth in both depth and scale. In addition to expanding within the U.S., The Trevor Project plans to support LGBTQ+ youth internationally.

From a technical perspective, this is easier said than done, because adding a new language to an existing model is so difficult. In addition to the entirely new vocabulary and syntactic differences between many languages, there are idiomatic concerns. For example, offering services in Spanish requires The Trevor Project to deal with a new set of slang terms alongside a new set of quirks and foibles within text messages.

Expanding internationally raises concerns about providing services in parts of the world where LGBTQ+ people face increased marginalization and have more restricted rights. TrevorSpace has been banned in some countries already, and one of the tasks on The Trevor Project’s multiyear roadmap is figuring out more ways to provide accessible and anonymized resources in those places.

The Trevor Project has been concerned with incorporating anonymity and data privacy into its services all along. This shows up even in small details, like removing branding from TrevorSpace’s chat room so that a client can be discreet for safety reasons, such as when someone might be looking over their shoulder. Additionally, each conversation between a counselor and a young person is treated as an isolated interaction. This approach allows people to get help where they are emotionally, in their particular crisis, without reference to any knowledge The Trevor Project may have from prior interactions.

According to Callery and Fichter, The Trevor Project is currently expanding its machine learning and software engineering teams and exploring partnerships with university research programs to further improve its quality of care.


By VentureBeat
