Is ChatGPT Safe for Healthcare?

July 3, 2023

// By Wendy Margolin //


Four healthcare leaders discuss opportunities, risks, and possible pitfalls.

Boston Children’s recently posted a job description for an AI prompt engineer to join its Innovation and Digital Health Accelerator. The ideal candidate needs a “strong background in AI/ML, data science, and natural language processing, as well as experience in healthcare research and operations.”

This came only months after the November 2022 release of ChatGPT, which alerted the masses to the potential uses (and risks) of large language models (LLMs).



Even in a sector slow to embrace new digital technology that could potentially compromise patient privacy, AI’s promise to reshape healthcare has many cautiously testing the waters.

In a recent webinar from eHealthcare Strategy & Trends, four industry leaders discussed use cases and potential hazards of artificial intelligence that could dramatically alter work — and even the world as we know it.

How Do Large Language Models Like ChatGPT Work?

Simply put, LLMs turn large bodies of text into numerical representations in a process called embedding. “They turn words into zeros and ones and then apply complex math to make predictions. But the outcome is only as good as the input,” says Ahava Leibtag, president of Aha Media Group.



This process allows the LLM to predict the next words based on the input. “If somebody inputs a string of text, predicting the next string of applicable numbers translates into applying deeper amounts of context in new and exciting ways,” says Chris Hemphill, senior director of commercial intelligence at Woebot Health.

That output is then refined through a data-labeling process in which human reviewers rate responses, indicating which ones they like or dislike.
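The mechanics described above, where words become numbers and the model scores candidate next words against the context, can be sketched in a toy example. This is an illustration only, not how a production LLM is built: the random vectors stand in for learned embeddings, and the tiny vocabulary is invented for the sketch.

```python
import numpy as np

# Toy illustration: each word maps to a vector of numbers ("embedding").
# A real LLM learns these vectors from data; here they are random.
vocab = ["the", "patient", "doctor", "visits", "clinic"]
rng = np.random.default_rng(seed=0)
embeddings = {w: rng.normal(size=8) for w in vocab}

def predict_next(context_words):
    """Score every vocabulary word against the averaged context vector
    and return a probability for each candidate next word."""
    context = np.mean([embeddings[w] for w in context_words], axis=0)
    scores = np.array([embeddings[w] @ context for w in vocab])
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()  # softmax: raw scores -> probabilities
    return dict(zip(vocab, probs))

probs = predict_next(["the", "patient"])
# The "prediction" is simply the highest-probability word; a real LLM
# does the same thing with billions of learned parameters.
print(max(probs, key=probs.get))
```

The key point for the discussion that follows is in the last line: the model always outputs *something* plausible-looking, whether or not its training data supports it, which is why the panelists stress verifying the output.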

This can feel like magic to the end user, but it’s not without risk.

Leibtag likens LLMs to Greek mythology’s Prometheus, the Titan god of fire. Prometheus stole fire from the gods to give to human beings without recognizing its ability to destroy. “Like fire, this has incredibly destructive power, as well as exciting potential,” she says.

New Technology Offers Numerous Benefits to Healthcare Marketers



As healthcare leaders and employees experiment with LLMs, the future of healthcare can seem more like science fiction than reality. “You could imagine that one day you feed your genome, sleep habits, and exercise habits to produce a personalized healthcare plan. This will help doctors move away from being data clerks and back to becoming healers and practitioners,” says Leibtag.

For Chris Pace, chief digital marketing officer at Banner Health, the productivity potential for those working in healthcare marketing is massive. “We have a pretty lean team full of strategists and experts. For us, it’s the people, process, and technology that can make it all work together.”

Finding ways to alleviate the workload on overwhelmed marcom teams is key, says Alan Shoebridge, associate vice president of national communication at Providence. “We’ve been doing a lot the last three years, and burnout is high. Where can this take the edge off?”


Shoebridge’s team has experimented with using ChatGPT as a starting point for communications like press releases, meta descriptions, and summaries. “It’s not going to be the finished product we send out, but it lowers the burden of getting started,” he says.

Risks Still Far Outweigh the Benefits

The unknown journey down the artificial intelligence path poses risks for everyone, with some AI leaders — including OpenAI CEO Sam Altman — warning of extinction-level risk to humanity. Treading carefully is especially important for the highly regulated healthcare industry, where legal, privacy, and cybersecurity concerns intersect.

Five Major Risks AI Poses for Healthcare

  1. Biases: Hemphill warns that the potential for implicit bias is even more dangerous in healthcare than in other industries: “There are big and real consequences if we’re not paying attention to things like systemic racism, gender bias, and other sociological factors that impact how these algorithms are made and how we ultimately employ them.”
  2. Misinformation: Social media has proven to be fertile ground for healthcare misinformation and even disinformation — especially during the pandemic. Healthcare has the mandate to do no harm, but these models are only as accurate as the training input. This means bad actors can wreak havoc.

In May, an AI-generated fake photo of a Pentagon explosion was posted and shared on Twitter, reverberating in the stock market. “We must be stewards of this technology. We have to use it in smart ways. It’s not ready for prime time,” says Leibtag.

  3. Accuracy: During the pandemic especially, many institutions earned a reputation as authorities in the healthcare space. After weathering the biggest disinformation wave to strike the industry, one widely publicized false claim could risk everything. “We earned our spot at the table, and to go backward from there just to take the easy path is a slippery slope,” says Pace.
  4. Content Quality: The ease of producing massive amounts of content now means everyone can become a publisher. And while AI can churn out complete blogs on limitless topics, the style tends to be cookie-cutter. Copyright and originality are significant risks. “How will you stand out in the marketplace now that LLMs produce all this synthetic content? What are you going to do that’s custom?” asks Leibtag.
  5. Privacy: Inputting confidential patient information into an LLM like ChatGPT is a HIPAA violation — even if that content is never published — because the input may be retained and used to train future models. “No protected information should ever be entered into these systems right now,” says Hemphill. “If you wouldn’t say it on Facebook or Twitter, it shouldn’t go in.”

AI Will Soon Be Everywhere

Many healthcare institutions block social media use by employees, but avoiding LLMs will likely prove impossible. The tools will simply become ubiquitous as they are integrated into everyday products from Microsoft, Google, and Adobe. “Everything you use to do your work is going to start incorporating this more,” says Shoebridge.

The productivity potential of LLMs alone is reason enough to experiment with the technology in ways that still avoid the pitfalls. Shoebridge likens ChatGPT to an intern. “It’s like a workforce extender. It’s not replacing people but allowing you to do more,” he says.

Because empathy and compassion matter more in healthcare than in most industries, the webinar panelists offer advice worth heeding. “Lives and livelihoods are at stake based on how we approach these technologies,” warns Hemphill.

As with any new technology the healthcare industry adopts, moving slowly lowers the risks. “It’s okay to be optimistic, but we’re all learning. And when you’re in learning mode, you need guardrails,” says Leibtag.

Leibtag recommends a “trust but verify” approach to using LLMs: use the tools where they help, then fact-check the output before relying on it.

After all, she warns, “If you take it to create a piece of published content, you’re burning the house down. If you use it to get seven headlines and edit one to sound like your brand, that’s lighting a candle to make the house smell nice.”

Watch the full webinar here.


As owner of Sparkr Marketing, Wendy Margolin helps busy healthcare marketing communications teams create more content. She’s on a mission to build a better medical web, one article at a time. Her favorite form of content is hospital brand journalism, which ties together her 20-year career in journalism, marketing, and healthcare.