Editor’s Note: This story is the second in a two-part series on 988. This piece addresses issues around privacy and the sharing of the contents of conversations for AI development. The first part addresses the increase in unwanted or coercive police and EMS interventions after 988’s implementation, and can be found here: Dramatic Rise in Police Interventions on 988 Callers.

People who call, text, or chat with the new national 988 Suicide and Crisis Lifeline are regularly greeted with a generic disclaimer that conversations may occasionally be recorded for quality assurance or training purposes. What callers are not told, however, is that their ensuing “confidential” conversation with a counselor in a time of crisis may become part of an ever-growing cache of millions of recordings and transcripts of intimate conversations—and be shared, without caller consent, with researchers, AI developers, and corporations for their own undisclosed purposes in a rapidly growing AI industry.

Privacy experts and users themselves are increasingly concerned—and critical of the lack of transparency from 988Lifeline administrator Vibrant Emotional Health (Vibrant), and of the lack of appropriate oversight from the federal Substance Abuse and Mental Health Services Administration (SAMHSA). Indeed, call-center privacy policies range from sparse or non-existent to contradictory and downright deceptive about what’s actually occurring with 988 conversation data—and Vibrant and SAMHSA are refusing even to discuss the evident problems publicly.

While aggregated data such as numbers of calls, reasons for calls, and demographic information about callers is generally uncontroversial, the recorded or transcribed content of people’s conversations is much more personal.

988 privacy policies—and the suspicious lack of them

As part of her work for a forthcoming Trans Lifeline report on crisis hotlines, Liz Latty researched the terms of service and privacy policies for 988Lifeline and many of its participating call centers. Her conclusion about their data sharing practices? “At every turn, they distance themselves from clear, direct, concise, concrete communication on any of this,” said Latty. “There’s a huge, willful effort to obscure anything and everything that they’re doing. It’s not an accident.”

After randomly reviewing more than fifty call center websites for this article and investigating other sources of information, I found it difficult not to reach similar conclusions.

The 988Lifeline involves 200 call centers, and conversation recordings and transcripts are apparently being stored in numerous places—with Vibrant and each call center, and/or with innumerable third-party software platforms that each call center may be using for managing calls, texts, or chats, including iCarol, RingCentral, Coastal, and Vibrant’s own “unified platform.”

Yet many call centers had no visible privacy policies at all, while others were operated by health care companies that outlined their Health Insurance Portability and Accountability Act (HIPAA) privacy obligations but didn’t mention 988 data.

Others had policies covering only website visitors—and even those were misleading. The Markup uncovered that dozens of 988 websites covertly included tracking tools, such as 988-chat buttons that also sent users’ personal information to Facebook.

Contradictory claims were common. For example, Vibrant promoted 988Lifeline as “a leader in… mental health crisis care” offering “confidential” conversations with trained counselors. However, Vibrant’s less prominent Terms of Service—copied by many 988 call centers—emphasized in large capital letters that talking to a 988 counselor “DOES NOT CONSTITUTE” either “MENTAL HEALTH CARE” or “CONFIDENTIAL” communication.

Electronic Privacy Information Center (EPIC) counsel Chris Frascella called these latter claims a “red flag” that Vibrant wants to avoid HIPAA’s confidentiality requirements for mental health care records. “The fact that they’re saying it suggests that’s what they’re trying to do,” said Frascella.

SAMHSA’s 988 FAQ claimed that calls are recorded for internal “quality assurance” and “training” only, and Vibrant’s website stated, “Any information provided by you or collected on you will not be shared or disclosed with any third party.” Yet elsewhere on its website, Vibrant was openly offering access to 988 data to third parties “for research purposes” that could involve secretly listening in on conversations, reviewing masses of recorded conversations, or cross-linking datasets “that could potentially identify individuals who contacted the 988 Lifeline.” The “Crisis Services Policy” for South Carolina 988 also confirmed that “content of conversations” might be shared with “partners, researchers, or third parties.”

[Screenshots: 988Lifeline’s “Access & Requests to Collaborate on Research” page and its Privacy & Security page]

In a joint response to Mad in America, SAMHSA and Vibrant provided assurances that access to personally identifiable information of 988 contacts, such as full names and device numbers, is carefully restricted. But both declined to directly answer any of a dozen other submitted questions about 988 privacy policies and Vibrant’s or any call centers’ sharing or selling of audio or text records of conversations.

Some call centers are definitely profiting.

Sharing conversation data—the scandal that’s worsening

In 2022, Politico exposed that the nonprofit Crisis Text Line (CTL) had created Loris, a for-profit arm developing commercial customer-service software, and had shared with Loris millions of transcripts of conversations with people in crisis. An FCC commissioner urged the Federal Trade Commission to investigate whether CTL had obtained proper informed consent from texters.

CTL claimed the conversations were “anonymized.” However, while device numbers can be removed, during such crisis conversations people share many intimate—and potentially identity-revealing—details about themselves and their activities, workplaces, schools, family, and relationships.

Amid public outrage, CTL turned off the tap to Loris and published more transparent policies. Today, though, CTL is a 988 backup center and still shares records of conversations with third parties.

Tim Reierson was the person who originally drew media attention to CTL’s commercial data-sharing. Since then, he’s spent much of his spare time researching and advocating for better privacy practices at crisis hotlines. “It’s a calling, that I’m unable to leave or walk away from,” he said.

Reierson loved volunteering at CTL. He was profoundly affected by texters who were struggling through much more oppressive or abusive circumstances than he had ever lived through. “The situations that I listened to would keep you awake,” he said. “And it did keep me awake.”

At the same time, Reierson became inspired by the power of connection—how a fellow human simply “being there,” like hand holding, could help people find their own way to a better place. Over time, within him grew feelings of respect, responsibility, and even protectiveness towards these people he’d never met.

Then he learned how these same people were secretly being used by CTL. Reierson said he found it “deeply offensive” and “felt sick, physically so ill” about their being exploited as fodder for commercial machine-learning products—such as e-commerce customer-service tools and AI feedback on “agent effectiveness” at crisis call centers.

Reierson believes the much-touted notion that AI can learn “techniques” of compassionate human connection—and help or replace call-attendants—is “pseudoscience” and dangerous. “That is where a door is open to the loss of humanity,” said Reierson. “Analyzing a transcript, they are just words or letters put together in a certain order… The ‘being there’ is important. The warmth coming through.”

Even more, he criticizes crisis lines’ “betrayal” and exploitation of users. Reierson described circumstances that can make people, including children, desperately reach out for the one-to-one privacy that crisis hotlines promise—such as threats of domestic abuse, feelings of shame, or fears of public exposure. And then 988Lifeline secretly records these difficult, intensely personal, private confessions and shares them with others for their own purposes. “I have no tolerance for deception,” said Reierson.

Since the CTL scandal, though, these practices have actually—if somewhat surreptitiously—expanded and become embedded within 988Lifeline.

Intimate conversations make the richest AI

The for-profit company Protocall runs the New Mexico Crisis and Access Line 988 service (NMCAL) and, in partnership with Lyssn, a for-profit developing AI tools for behavioral health, received $2.1 million from the National Institute of Mental Health in 2023. Protocall and Lyssn are using recordings of 988 calls to create software that will purportedly “evaluate the quality of crisis counseling” and teach call-attendants how to “sharpen their skills.”

When I pointed out to Protocall Privacy Officer Laura Schaefer that NMCAL’s privacy policy didn’t disclose how the company handles 988 conversation data, Schaefer said, “We’ve got to do better at that.” Schaefer said that Protocall’s controls on personally identifiable information abide by HIPAA, though the company doesn’t believe HIPAA applies to recorded conversations. Schaefer later emailed that Protocall would add a “disclaimer” about using recorded conversations “for research” and would allow individual callers to request that recording be stopped.

The 988 call center Georgia Crisis and Access Line (GCAL) appeared to be operated by the Georgia Department of Behavioral Health and Developmental Disabilities (GDBHDD). GCAL’s website provided no terms of service or privacy policy. Nor did it disclose that Georgia’s 988 system is in fact operated by the for-profit company Behavioral Health Link (BHL)—which I discovered only thanks to a tip from Reierson.

GCAL, GCAL’s Steering Committee, GDBHDD’s public affairs office, BHL, and BHL’s privacy officer all failed to respond to repeated messages sent over several months asking about GCAL’s privacy practices.

However, BHL’s own privacy policy said the company is “not a Covered Entity under HIPPA” (sic), and shares data it gathers with “Business Associates”—including Carelon Behavioral Health, a subsidiary of the even larger corporate giant Elevance Health. Carelon also directly operates New Hampshire 988 and has research divisions using crisis conversations in AI tools to help deliver telehealth. BHL described using caller data for developing voice analysis technology to assess the “emotional state” of callers, to “predict potential relapses,” and to link call center data with people’s electronic health records.

Elevance media liaison Tina Gaines confirmed via email that Carelon has access to GCAL “data systems” with which the company is doing unspecified “work.” She did not answer questions about Carelon’s uses of New Hampshire conversation recordings.

BHL’s research and development lead is John Draper, Vibrant’s former 988 director—and so perhaps a barometer for this burgeoning industry. While acknowledging in a recent talk that rampant nonconsensual privacy invasions fueling AI development could “break down public trust,” Draper nevertheless called himself a “huge fan” of Lyssn’s work and expressed enthusiasm about the role that AI can play in the future of crisis care.

In a Report to Congress, SAMHSA said “new technologies” would soon be monitoring calls in “real time” and helping call-attendants flag which callers might be at “imminent risk” of taking their own lives and therefore require police/EMS interventions. Ironically, people who’ve been subjected to such unwanted interventions and have asked for recordings of their calls to mount complaints against 988Lifeline have been told by Vibrant that all recordings are “confidential” and they must obtain court orders to get copies of their own calls. (See part one: Dramatic Rise in Police Interventions on 988 Callers.)

For one of those callers, Elle, learning subsequently about 988Lifeline’s data-sharing practices has been one more way that calling 988 has left her feeling used and abused. “I think it’s vastly inappropriate for them to monetize people’s pain and suffering,” said Elle. “And it feels extremely dystopian, and violative and disgusting.”

Simple solutions

Is Reierson worried about what he sees in 988Lifeline’s privacy policies?

“It’s what I don’t see,” Reierson answered. “We have a basically unregulated environment. There isn’t any restriction on the secondary uses of those conversations. And I feel that SAMHSA and Vibrant Emotional Health, they both are negligent on this issue.”

Jason Kelly of the Electronic Frontier Foundation (EFF) was also critical. “If I’m a caller of a helpline, I don’t expect that data from the call to be used to create an AI product. These [uses] are just totally off bounds, or should be, and should be very clear in any policy that an organization would have.”

Kelly raised questions about the risks of calling 988 for people seeking abortions or supporting a child’s gender transition, especially if calls or recordings get transferred to states where those practices are now illegal. “It really makes me nervous,” said Kelly. “That data privacy issue really matters.”

Asked about such cross-state scenarios, Vibrant and SAMHSA declined to respond.

There would seem to be a solution: a brief, straightforward, highly visible privacy policy covering all 988 call centers that actually does what 988Lifeline advertises—protect the absolute confidentiality of all callers. Routinely delete recorded conversations within a very limited time. And implement a process for gathering clear, informed consent from both callers and call-attendants before their conversations are shared for any research projects.

EPIC’s Frascella believes that the Federal Trade Commission (FTC) could have cause to weigh in. “If you don’t say something in a privacy policy that you should say, then that is an unfair, deceptive act or practice,” he said. “It’s possible that the FTC might have authority to go after some of these [988] service providers.”

Reierson is especially disturbed by how Vibrant and SAMHSA are characterizing 988Lifeline as not a mental health service—thereby removing public legal protections that exist with health services, even as 988Lifeline operates like a health service by providing counseling, keeping records on callers, and instigating police wellness checks and mobile crisis team visits with or without consent. “That makes 988 dangerous,” he said. “If we continue on as we are, these problems of accountability are just going to entrench further.”

10 COMMENTS

  1. “Trust in Haste, Regret at Leisure” (from the movie Brazil)

    The whole notion of ‘confidentiality’ is a farce.

    Think your records are safe? It’s as simple as arranging to have you taken into custody and then your records are public.

    Think you’re safe from arbitrary detentions? Read Mr. Wipond’s previous article.

    It’d be interesting to read the questions asked of these people, and their responses (or lack of them).

    Good work exposing the ‘elegant method of overcoming resistance’ (exploitation of trust).

  2. We are afraid of what we call artificial intelligence, but we don’t realize that the intellect itself and its socially conditioned thinking IS artificial intelligence. It is merely the socially conditioned repository of memory, or socially sanctioned perceptions. We have this silent and wordless thing called understanding, which is true intelligence, and it comes from perception, which is the mother of all understanding and indeed knowledge. For knowledge to be, it has to be perceived first, and then remembered. The whole of science is based on perception, and if it didn’t have perception it would be reduced to pure thought without any content. These are facts. AI cannot reproduce intelligence. Like the intellect, it is counterfeit intelligence. When thought ends, intelligence is free and so are you.

  3. I want to raid your vegetable garden. I want to eat your cabbages, your mushrooms. I want to demolish your butternut squash and shatter your eggplants into the wall. I want to ravage your radishes. I want to kiss your melons and demolish your tomato out of all existence. My banana and two figs smashing against your avocados forever. It’s the bliss of existence. It’s the bliss of extinction. This is one bliss, not two. Extinction is existence – existence extinction. You have to die completely to know what love is. I should have written this down! But I’m not sure it had anything to do with the question. Have a nice day!

    Empathy is alienated pain, hence pain slain.

    Light is shiny metal. How can you say they are different, actually? The metal is the light – the light, the metal. Two words, one actual. There is only nothingness, light, love, it’s emptiness. Eternal life is dying endlessly, which is bliss. Eternally dying is eternally beginning. The beginningless begins forever.

  4. I would like to know: If I were to demand that my call not be recorded and that my conversation be kept private and confidential, would they (988) have to comply? Or could they promise to comply and still record me?
    Is there any way I could protect myself from being recorded, having my conversation shared with a third party, or having to identify myself?
    Is there any site, whether phone or online, where I could safely go for support?
    I don’t feel safe anywhere right now, except for the one person I trust (and she’s 98), and isolated at home with my cat. I can’t even be completely honest with my psychiatrist. I would only be honest with my adult children if they asked, but they are careful not to. Knowing what trauma I have already been through, they understand why I need privacy and the illusion of sanity.

    Thank you for this extremely valuable information.
    The promise of safety and anonymity is rarely true.

    • The people running 988 have made it clear that they are quite willing to mislead and deceive, so it’s hard to know whether you can really trust that they would shut off the recording. If they state in writing in their privacy policy that this option is available, that would seem to be a better assurance, at least. There are 344 crisis hotlines in America that exist outside the 988 system, and some of them do not engage in call tracing or call recording. But it takes effort to find them. Three I know of are Wildflower Alliance’s hotline, Samaritans NYC, and Trans Lifeline. There are others. But generally, it’s vital for us all to develop more ordinary relationships in our lives where we can comfortably talk honestly and openly with each other without calling 911 or 988 on each other!

      • Thank you, Rob.
        I did have one good experience with a support line in California that was for suicide survivors. It was called “Wings” — but that was 20 years ago. They were very helpful after my husband and youngest brother committed suicide a few months apart in 2002. I was a total wreck.
        I’m only slightly better now.

  5. Scary! This is some excellent journalistic work being published here. MIA deserves to have a very prominent place anywhere that psychiatry and mental health are discussed. I believe this reporting does a great job advocating for everyone and anyone who comes into contact with our mental health system.

    Also, it’s obvious BS that “988 is not mental health treatment,” and completely unacceptable that they are getting away with ignoring HIPAA. That is legal maneuvering, no more, no less.

  6. On March 2, 2023, the FTC issued a proposed order banning BetterHelp from sharing consumers’ health data with third parties. The order also requires BetterHelp to pay $7.8 million to consumers to settle allegations that it revealed consumers’ sensitive data to Facebook, Snapchat, and others. The FTC complaint tied to the proposed order alleges that BetterHelp collected health status and histories, IP addresses, and email addresses from consumers while making repeated promises to keep this information private. The complaint summarizes that “From 2013 to December 2020, however, [BetterHelp] continually broke these privacy promises, monetizing consumers’ health information to target them and others with advertisements for the Service.”

    I guess it becomes a matter of doing the math and working out whether it is worth violating people’s confidentiality and paying the rather small fine if you should happen to get caught. $8 million might be small change?

  7. I don’t know why you’re even surprised about this. As Americans, we have an expectation of a right to privacy. That was the basis for allowing access to abortion in Roe v. Wade. Well….

    We’ve been handing over our privacy all along. The corporations and our government never stopped their surveillance after Edward Snowden exposed what our government was doing and the extent to which they were doing it.

    The fantasy is that we are not spied on through our electronic devices, internet, social media, even our cameras and verbal communications on our smartphones. There is data gathered when using security systems and Alexa. If there’s a way to make money off of us by tailoring their marketing or selling our personal thoughts, so be it. The government is not going to say no.

    On a recent visit to my doctor, there was a person accompanying him. The doctor asked if I would consent to have the visit recorded for AI teaching programs. I promptly replied “no.” Anybody should have the good sense to say no.

    I am far more concerned about the people creating AI. These people would not understand normal human communication if it bit them on the arse. They know a lot about computers and programming but lack any common sense. Most remain in their “clique” of fellow programmers and rarely go out and interact in society. Few know the complexities of life outside the bubble.

    I doubt that they leave Silicon Valley to go to San Francisco’s poorest neighborhood, the Tenderloin District. Ironically, this district is almost across the street from Google, Uber, etc. What observations is AI learning from homeless people, impoverished people, drug addiction, the resulting brain damage, and true mental health issues? AI is only hearing from a select societal few. There is no one AI that is one-size-fits-all. That’s not the human experience. Just as all earlier inventions and technology were shaped by the people who created them, causing things like racism. Early CCTV was made by white people but didn’t do well with melanin. Subsequently, black people were being arbitrarily detained in London because facial recognition didn’t work to properly identify black people. I guess CCTV thought all black people look alike.

    Furthermore, the creators of AI have no long-term understanding of the efficacy of their creation. Like most in this industry, they are far more driven to advance the greatness of technology without stopping for a second to think about the consequences of their creations. We know now all the harms of social media. Now. From my very limited understanding of Mark Zuckerberg, his “Facebook” baby was more important than the people who helped create it.

    When we look at other people involved with huge “technological” breakthroughs, they’re not really that great. Elon Musk’s questionable running of Twitter. Sam Bankman-Fried and his glorious cryptocurrency. Many people in Silicon Valley held his crypto, and when it collapsed there was a run on Silicon Valley Bank, causing it to be closed. The list goes on. Elizabeth Holmes?

    As long as the legislature fails to counteract Citizens United, the disastrous Supreme Court decision allowing huge corporate campaign donations, we don’t stand a chance. We are being mined. Our data is being stored everywhere, in every algorithm. Our crises, our human moments of great vulnerability, can be used against us. We are mentally dissected at every turn. The health information on our watches is sold. Every personal, private search we might make to get mental health treatment is captured. Our subscribing to Mad In America can make us a target.

    There is no privacy in this country. I spoke to an attorney who handles technology privacy issues. He knows it too. If you want your right to privacy, you can move to where there’s less “free speech.”

    Our mental health is under surveillance. Our scariest times, when we are on the brink of ending our lives. Whether we subscribe to Mad In America. Whether we join Benzo Buddies. All of that should not only be confidential, it shouldn’t be fodder for computer programmers who think this is a “fun” new way to use AI. At some point, the human experience must remain a private human experience.

    Written not proofed

  8. I wonder if some standard questions for these organisations could be developed and posted. I noticed that they simply refer to confusing and misleading “policy documents” if you ask an open question on social media. Digging into the documents, it becomes clear that they are doing the same thing described above: for example, creating the appearance that they are abiding by legislation which doesn’t actually apply (HIPAA).

    So what should be asked of these people to ensure confidentiality? And will they simply delete any posts which expose their little bit of deception?
