ChatGPT: Is It Safe to Use for Your Healthcare Communications?

A new eHealthcare Strategy & Trends webinar for healthcare marketers and strategists

Presented May 24, 2023

Are you up to speed on ChatGPT? Is your organization counting on you to be the expert? Explore the opportunities and threats with our panel of industry leaders — and stay ahead of the curve.


This webinar is free for members of eHealthcare Strategy & Trends


Not a member yet?

Start your trial membership.

Sign up for a free 7-day trial to watch this webinar and download the slides now.

Plus, get access to all member resources.

Already a member? 
Log in to watch. 


Your Presenters:
Ahava Leibtag, President and Owner, Aha Media Group
Chris Hemphill, Senior Director, Commercial Intelligence, Woebot Health
Chris Pace, Chief Digital Marketing Officer, Banner Health
Alan Shoebridge, Associate Vice President for National Communication, Providence



Sponsored by Aha Media Group 
This event is free to attend thanks to our sponsor.

Would you put someone you cared about in a self-driving car and let it take them for a spin? Maybe someday — but today, you’d be putting their life in the hands of technology that isn’t ready for prime time.

ChatGPT is the latest technology craze, even newer and more unvetted than driverless cars. With its blazing fast ability to generate content, ChatGPT has captured our imaginations while surfacing our biggest fears: Can it replace us? Will human writers become obsolete?

For healthcare content marketers, the big question is, should you trust ChatGPT to write your content? And if so, could your patients trust the ChatGPT-generated information they find on your website? Today there are more questions than answers.

Watch this lively panel discussion with top industry experts who will share how their organizations are using ChatGPT — or NOT using it, and why.

You’ll learn:

  • Who’s using ChatGPT, and what they’re using it for.
  • Why you might want to think twice about having ChatGPT write your healthcare content.
  • What’s coming next with ChatGPT.
  • How search engines view AI-generated content.
  • Why learning to write prompts is a new key skill every communicator should master.
  • Whether ChatGPT makes more work or less work for content writers.
  • What to consider when using ChatGPT to write headlines, Tweets, Top 10 lists - and much more!

About your Presenters


Ahava Leibtag, a 2020 inductee into the Healthcare Internet Hall of Fame as an Innovative Individual, has 20+ years of experience in content. She has consulted with some of the world’s largest firms to attract and grow their audiences. Ahava is the president and owner of Aha Media Group, LLC, a copywriting, content strategy and content marketing consultancy. She is also the author of The Digital Crown: Winning at Content on the Web and loves a great logic puzzle, a long game of Apples to Apples and anything that has chocolate.




Chris Hemphill is director of Commercial Intelligence at Woebot Health, a role that combines data science with AI strategy for health systems and insurers. He works with healthcare leaders to drive ethical and effective decisions with AI and algorithms. Hemphill’s work in data ethics initiatives led to collaborations with Chicago Booth’s Center for Applied AI and reduced racial and gender bias in algorithms. Hemphill is currently a McSilver Fellow-In-Residence. Their current focus is on engagement arcs and uncovering unseen population health needs in the digital mental health sector.




Chris Pace is a marketing leader with over 20 years devoted to the health care industry. He is currently the Chief Digital Marketing Officer at Banner Health, the largest hospital system and employer in the state of Arizona. His responsibilities include driving the content strategy, service line marketing strategy, and website development strategy. Since joining Banner Health in 2018, Chris’ leadership has pushed Banner Health to expand its digital footprint through a comprehensive digital marketing stack. BannerHealth.com is now a top 10 most visited health care industry website.




Alan Shoebridge is the Associate Vice President for National Communication at Providence. He leads a diverse, multi-state communication team responsible for internal communication, public relations, issues management, labor relations and DE&I initiatives. Alan has also held senior marketing and communication leadership roles at Kaiser Permanente and Salinas Valley Memorial Healthcare System. Alan currently serves on the board of the Society for Healthcare Strategy and Market Development (SHSMD), a membership group of the American Hospital Association that specializes in marketing, communication, business development and strategic planning.


Transcript

0:00
Welcome to today's webinar, ChatGPT: Is It Safe to Use for Your Healthcare Communications?
0:06
I'm Jane Weber Brubaker, Executive Editor at Plain-English Health Care. We're the publisher of Strategic Health Care Marketing and eHealthcare Strategy & Trends, and producer of the eHealthcare Leadership Awards.
0:17
There's a reason the title of this webinar ends with a question mark.
0:20
Generative AI came onto our radar late last year like a freight train with the launch of ChatGPT, and right now there are more questions than answers.
0:29
Is it a good thing, or a bad thing, or both?
0:32
Will it take our jobs, or will it help us be more productive? Today, our panel of experts will share their insights and opinions on generative AI and the implications for our industry.
0:43
We'll have them introduce themselves shortly, but before that, to get a sense of your experience with generative AI, we have a poll question. Just a heads up: once we launch it, it'll take over the screen until we close it.
0:55
So, Ryan will go ahead and launch the poll.
1:03
All right. The question is: what is your level of usage of ChatGPT and other generative AI models? Please choose the option that best reflects your experience, and we'll share those results in a few minutes. While you're doing that, and the results are being tallied, I'll go over some of the housekeeping details. Today's one-hour session is being recorded, and you'll receive a link to view the recording in a couple of days, once it's been processed.
1:26
If you've been to our webinars before, you know that we usually hold questions until the end.
1:30
For this session, we will be monitoring your questions throughout the panel discussion, and we hope to respond to as many questions as possible, but please be aware that over 700 people registered for this webinar, and we may not get to your particular question.
1:44
So, we'll give you a few more seconds to respond to the poll, and then I'll introduce today's panelists.
1:50
So, let's see.
1:53
About 81%.
1:59
Just leave it going for a few more seconds.
2:05
All right, going once, going twice. OK, Ryan, would you please take down the poll?
2:13
Thank you. And we'll share the results in a few minutes. So, let's get started with introductions. Ahava, want to go first? Hi, everybody. My name is Ahava Leibtag. I'm the President of Aha Media Group. We're a content marketing consultancy. We do copywriting, content strategy consulting, as well as content training.
2:31
I'm really excited to be here. Thank you so much for having us.
2:35
Thank you. Alan?
2:38
Hi, Alan Shoebridge. I'm the Associate Vice President of National Communication at Providence. We're a seven-state healthcare system, mostly based on the West Coast, with over 120,000 employees and 51 hospitals.
2:50
Chris Pace?
2:53
Hello, everybody. I'm Chris Pace. I'm the senior director, chief digital marketing officer at Banner Health, and we're a system based out of Phoenix, Arizona.
3:03
Our footprint is six states, 32 hospitals, 55,000 employees, and everything from primary care all the way up to tertiary and academic care.
3:17
Thank you. Two really, really good systems. So, Chris Hemphill?
3:21
Hey, everybody, thankful to be here. I'm Chris Hemphill. I'm with Woebot Health.
3:27
Within Woebot, I work in data science, understanding conversation flows and pathways that people have with our application, which is a mental health assistant that uses AI and cognitive behavioral therapy to help with mental health challenges in moments of need. There are currently about 1.4 million users of Woebot. And I am just really excited to get into some of the promises of these generative approaches, as well as the things that we should be looking out for and cautious of as we approach this from a healthcare perspective.
4:13
OK, so, Chris, let's just continue on. I know you've been working in AI for several years now.
4:20
Can you start us off by just giving us an overview of these large language models, like ChatGPT? What exactly are they, and how do they work?
4:27
Yeah, I'm excited about that. And I've got to say, it is crazy out there: constant headlines, YouTube videos, TikToks. I don't know what your social media is, but I can guarantee it's probably chock-full of various AI influencers and things like that.
4:46
But switching into our world in healthcare, there's big news coming out of Kaiser Permanente, Epic, and other major healthcare entities. You hear people like Dr. Eric Topol bringing that kind of perspective to it, and that's what I'll want to keep the audience focused on today.
5:03
OpenAI's approach to commercializing language models has captured this broad interest. And, personally, I've been saying for a while now that it's exciting to hear healthcare be interested in newer technology.
5:16
And, for a while, I've just been saying that we should look inwardly, and not necessarily focus on catching up with other industries.
5:25
We have an opportunity: there's empathy and compassion that apply in healthcare like they do in no other industry. So where we should be thinking is not what other folks are doing, but what they're not even capable of, because of the charters that we have to serve our patients.
5:44
I like to say that lives and livelihoods are at stake based on how we approach these technologies and the things that come down the road.
5:51
We're not Netflix, we're not Nordstrom. There are big and real consequences if we're not paying attention to things like systemic racism, gender bias, and other sociological factors that impact how these algorithms are made and how they are ultimately employed.
6:07
So part of having that higher standard means understanding what it is we're dealing with so that we know how and when to apply it.
6:15
So just to level set, I like to get all the way down to the very basics on this generative AI concept, and I'll talk about large language models really quickly.
6:25
And again, you might hear people refer to all this stuff as LLMs. All the stuff that's happening right now is actually much more than just large language models, and I'm going to get into that.
6:36
But an LLM, to give the real brief explanation: I'll just use an example of a sentence that includes the word "slap."
6:45
So, "that song slaps." "Slaps" being, in that context, a Gen Z term, not necessarily only Gen Z, but, you know, a Gen Z term for "I really like that song." That song slaps.
6:59
He slapped the table.
7:01
"Slap" in that context meaning somebody physically struck a table. With older paradigms of natural language processing, "slaps" and "slap" could only mean one thing. There was only one context in which the word could be used. But there you can see that the same combination of letters, that same string of letters, had two completely different meanings.
7:26
So enter this paper from Google that came out in 2017, "Attention Is All You Need." It introduced something called the transformer model. What the transformer model did was apply all kinds of context, so that in a scenario where you see "that song slaps," there's enough context around that string of letters, "slap," to say that it has an entirely different meaning than the physical-verb context that was used earlier.
8:01
So what these systems represent, what these large language models represent, is looking at large bodies of text, large bodies of context, to apply different numbers to words given a scenario. Applying different numbers to these words is called embedding. It takes a little bit of the magic away, and I'm here to take a little bit of that magic away and really just get down into what these systems are doing.
8:27
So when you have that different context window on what these words mean, then predicting the next word, like when somebody inputs a string of text, becomes a lot more realistic and convincing. Or rather, predicting the next string of numbers that's applicable.
8:50
That's the large language model aspect: applying deeper and deeper amounts of context in new and exciting ways. That transformer model I just brought up is kind of the dominant paradigm right now. But there's actually another set of models coming out that there's active research on. I'm not going to get too deep into that; it's too
9:11
nerdy for this kind of session right now.
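To make that "predict the next word" idea concrete, here is a deliberately tiny, hypothetical sketch in Python. It is nothing like a real transformer: it just counts, in a toy corpus, which word tends to follow which, and predicts the most frequent follower. The corpus and function names are invented for illustration.

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus):
    """For each word, count which words followed it in the training text."""
    following = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for current, nxt in zip(words, words[1:]):
            following[current][nxt] += 1
    return following

def predict_next(model, word):
    """Return the most frequently observed next word, or None if unseen."""
    candidates = model.get(word.lower())
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

# Toy "training data" echoing the panel's slap example.
corpus = [
    "that song slaps",
    "he slapped the table",
    "that song is great",
]
model = train_bigram_model(corpus)
print(predict_next(model, "that"))  # prints "song": it followed "that" twice
```

A real LLM replaces these raw counts with learned embeddings and attention over the whole context window, which is exactly what lets it tell "that song slaps" apart from "he slapped the table."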
9:14
What we'll focus on next is another aspect, which I think is less well known. Has anybody heard of this term, "reinforcement learning from human feedback"?
9:28
I think you can use the chat and respond.
9:31
But the other layer on top of this is that there is a human element to the current paradigm around ChatGPT.
9:40
So what's happening is that they're running these models, testing responses to various text inputs, and people are being asked to say whether they like or dislike the response. This isn't a process that necessarily involves a bunch of people who graduated from MIT. It comes down to a process called data labeling, and the people rating content make, if it's being done outside the US, in the $2-to-$5-an-hour range, or within the US, I've seen rates of about $15 an hour.
10:22
I wanted to emphasize that just to ask: what's happening under the hood? We're looking under the hood of these models, just like we look under the hood at a dealership, to know a little bit more about what's going on.
10:34
But now, that helps us understand what's actually going on.
10:38
A predictive mechanism to identify what number comes next, what string of numbers comes next, representing a string of text, which we interpret as a word.
10:49
And then a rating process where people (and we don't necessarily know who these people are or what their preferences are) rate these responses; there are enough of them that it trains the algorithm to generate things that we like. So I just wanted to bring attention to a little bit of what's going on under the hood.
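The rating layer described above can be sketched in a few lines. Real RLHF trains a reward model and fine-tunes the LLM with reinforcement learning; this toy (invented names, invented data) only shows the rate-and-aggregate step, where thumbs-up/thumbs-down labels from anonymous raters decide which output is preferred.

```python
from collections import defaultdict

def aggregate_feedback(ratings):
    """Average the +1 / -1 labels that human raters gave each response."""
    totals = defaultdict(float)
    counts = defaultdict(int)
    for response, label in ratings:
        totals[response] += label
        counts[response] += 1
    return {r: totals[r] / counts[r] for r in totals}

def preferred_response(ratings):
    """Pick the response with the highest average human rating."""
    scores = aggregate_feedback(ratings)
    return max(scores, key=scores.get)

# Hypothetical labels from raters whose identities and preferences we don't know.
ratings = [
    ("Here is a clear, sourced answer ...", +1),
    ("Here is a clear, sourced answer ...", +1),
    ("Confident-sounding but wrong answer ...", +1),
    ("Confident-sounding but wrong answer ...", -1),
]
print(preferred_response(ratings))  # the consistently liked answer wins
```

The point Hemphill raises survives even in this toy: whatever the pool of raters happens to like is what the system learns to produce.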
11:10
I don't want to spend too much time on this, but I think it's an important level set to understand that, hey, it's not magic happening here. And I think I successfully got through this without mentioning the term "artificial intelligence," because I wanted to bring it all the way down to the nuts and bolts for folks who want to keep an eye on it.
11:33
Now, for some of the impactful ethical thinking and business context around some of these things.
11:39
There is a paper that came out in 2021 by then-Google ethicist Dr. Timnit Gebru, Dr. Emily Bender, and Dr. Margaret Mitchell, called "On the Dangers of Stochastic Parrots." Again, it came out in 2021, before the major hype wave, but it outlines the things that we should be paying attention to in considering how we use large language models: calling attention to the potential for disinformation, the potential for biases in these results, and things like that. It's a scientific, peer-reviewed paper, but written in a very accessible format, and I think it's a really good resource for people who are looking at understanding these things, understanding how and when they should or shouldn't use these approaches.
12:37
Thank you very much. Chris, do you want to introduce the poll results? We'll find out whether people are on the same page, or whether we need to do a little catching up. Yeah, I would love to go over that.
12:51
So, Ryan, why don't you go ahead and show the poll.
12:55
Beautiful, OK.
12:56
This is the single-question poll, right? We're all curious about who we're talking to and what's going on. So check it out. I hope everybody finds much value in this. This is brand-new to everybody at the same time; this is the real, live reaction. Wow.
13:15
13:17
I'm actually surprised that the top one is, I haven't tried them.
13:21
And, again, I probably expected the "in production" amount to be a little bit lower. So it's very interesting to see the breakdown between those who haven't tried it, those who've tried it a little, and those who are starting to test. Actually, that 29% category does make a lot of sense to me: we're sticking our toes in the water, we're attending conversations like this. If we add all the numbers up, we get to 100, and that's 100% of people curious about this discussion. So, really, thank you for joining.
13:58
OK, Ryan, you can go ahead and close the poll.
14:01
Right, so let's move on to Ahava. I know that you've been following the developments in this field really closely in the context of content. Can you share your broad perspectives on generative AI, ChatGPT policy, things like that?
14:16
Sure.
14:17
So I think that a really good way to think about this is that we've never had such massive amounts of data before. And what these computational models really help us do is take those large amounts of data and try to find the signal in the noise.
14:32
So what they do is they take words, they turn them into zeros and ones, and then they apply really beautiful, complex math to make predictions. A great example of this would be: if somebody took all of the webinars that I've ever done and fed them into ChatGPT or some LLM and asked it to predict how many times I would say a particular word in an hour-long webinar, it would be able to do that.
15:01
But here's where it gets interesting: if it only took half of the webinars that I've ever done, then it would be missing a huge amount of data.
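Ahava's word-count example can be sketched in a few lines. This is a hypothetical illustration, not her actual data: the "model" just averages how often a word appears per transcript, and training it on only half the transcripts skews the prediction, which is her point about missing data.

```python
def predicted_count_per_webinar(transcripts, word):
    """Predict how often a word is said per webinar: the training-set average."""
    total = sum(t.lower().split().count(word) for t in transcripts)
    return total / len(transcripts)

# Four stand-in webinar transcripts (invented for illustration).
webinars = [
    "content content content strategy",
    "content marketing",
    "strategy content content",
    "marketing strategy",
]
full_estimate = predicted_count_per_webinar(webinars, "content")      # 6/4 -> 1.5
half_estimate = predicted_count_per_webinar(webinars[:2], "content")  # 4/2 -> 2.0
print(full_estimate, half_estimate)
```

Same algorithm, different training data, different answer: whatever these models are trained on is what they spit back.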
15:10
And this is where this technology really reminds me of the Prometheus myth, where Prometheus stole fire from the gods and gave it to human beings, not recognizing the incredibly destructive power that fire has, versus what fire can also be, which is incredibly exciting.
15:27
So this technology, this generative AI, ChatGPT, you know, it goes by a lot of names; if it walks like a duck and quacks like a duck, it's a duck. It is one of the most exciting things we've ever seen, because now we finally have the ability to feed large amounts of data into a system and get predictive analytics on it.
15:50
But if we don't feed it the right data, then we don't get the right thing out. So, whatever these models are trained on, is what they are going to spit back.
15:59
And that's what's scary about it to us in healthcare, because there is so much misinformation.
16:05
Now, what's out there, what people are learning from what they're reading on the internet, may not actually be right. It may not be scientific, it may not be evidence-based, all the words that we use.
16:18
And so it's a scary thing in any industry, quite frankly, but in healthcare especially. You know, we talk about "first, do no harm," and these models have the ability to do tremendous harm when it comes to healthcare.
16:33
So, I think it's exciting. You could imagine that one day you could feed in your genome and your sleep habits and your exercise habits, and it could spit back a personalized healthcare plan. It's going to help doctors move away from being data clerks and into actually becoming healers and practitioners again. It has so many exciting applications.
16:54
But to use it for communication, you have to remember: if it's dirty data in, you're going to get dirty content out.
17:01
And that's, I think, the really important thing for all of us to be focused on as a community that really wants to do no harm, right alongside the business clients that we serve, our doctors, our stakeholders, and, quite frankly, the public.
17:15
One of the things you talked about when we were getting ready for this webinar was that we didn't really know what would happen with social media, and the concerns we're hearing now. Do you see this as a parallel situation?
17:28
Yeah, so I think anybody who's following what's going on in Congress knows Congress is definitely scared about this.
17:34
They had hearings last week where they talked about regulating it, maybe setting up a separate agency to regulate it, making sure that everybody is honest about the data they're putting into it and what data they're using to train these language models on. The horse might be out of the barn right now for regulation.
17:51
I do want to call our attention to two things. Fifteen years ago, we gave people social media. We never could have imagined that in 2021 we would see a statistic like 25% of young girls have created a suicide plan.
18:07
That is shocking and upsetting, and the researchers out there are not sure exactly what the mechanism is, but there's no doubt that mental health issues are on the rise for children and adolescents. And it clearly starts to rise right around the advent of these tools.
18:24
So, I didn't know that. I have a 20-year-old and a 17-year-old daughter; I gave them social media when they were about 12 and 13 years old. I have a son who's 14 now. He has not gotten social media, because I learned from the experience. And it's scary to me that now, 15 years later, we have another technology that could be even more harmful.
18:44
And more scary. Chris Pace has a great line. He says, "I saw Terminator." So I think that, you know, we should be paying attention to this.
18:54
On Monday morning, somebody produced a picture of the Pentagon on fire.
18:59
The stock market went down, and the picture went all over Twitter under verified blue checks.
19:05
So if you don't know that people are paying for those blue checks now, you think that this is coming from a news agency that knows what it's talking about. That picture turned out to be a deepfake, and it was AI-generated.
19:19
So it's scary when we're dealing with fire. I mean, we're dealing with fire that we don't let children touch, that we're very careful about, that we have regulation about in my state.
19:28
If you don't have a fire alarm in your house, they won't pass your house for inspection when you build a new house.
19:34
So, I just think that what's going on is scary. And I think that we have to be stewards of this technology, and we have to use it in smart ways. But it's not ready for prime time. It's just not there yet.
19:45
And we gotta be smart about how we're using it and how we're applying it, just like everything else that we do.
19:52
Mm-hmm. Absolutely. So let's talk about the hospital's perspective, the health system perspective. Alan, you and Chris Pace both work for provider organizations. Can you share some of your personal views on ChatGPT, as well as what your leaders are talking about? What discussions are they having?
20:11
Sure.
20:12
Well, I'm just glad that Chris and Ahava laid out the technology and math, because I'm a writer and a communicator, and those are way outside my wheelhouse.
20:26
Looks like we froze up there.
20:31
Why don't we wait for Alan to unfreeze? Maybe we can ask Chris Pace to respond to that question.
20:38
So, what was your experience with ChatGPT, what are your leaders talking about, and how are they dealing with it?
20:43
Yeah, So, what I heard Alan starting with is that he's a writer and a communicator.
20:48
I am known as the mad scientist, so I like to try new things, to tinker with things and figure out ways to work more efficiently.
21:01
And, yeah, I would say, you know, for the size of organization we are, we have a pretty lean team that is really full of strategists and, you know, experts at addressing audiences in the channels that we operate in.
21:22
So, for us, it's really the people, the process, and then the technology that can make it all work together. And so, as we've been experimenting with different elements, the underlying problem with the OpenAI version of generative language is the fact that it's a harvest point. It's crystal clear that it's sourced off of a single dataset or, you know, a narrow dataset.
22:01
It's unknown what those datasets are. So, you know, if you're looking at content building in the eyes of Google and SEO, there are three pieces to the SEO puzzle: expertise, authoritativeness, and trustworthiness.
22:20
If you don't know where the data's coming from, then how can it be trustworthy? You know, Banner Health already earned its space as being an authority and an expert in healthcare.
22:34
And, oh, by the way, we just weathered the storm of the biggest disinformation and misinformation wave to ever strike healthcare. So, you know, we earned our spot at the table, and to go backwards from there and just take the easy path, yeah, there's a slippery slope there.
22:54
So, what we're finding with our use cases is, it's a good thought starter.
23:03
It's a good way to try to drive different ideas off of a single thread of information.
23:10
And then we still go back through our process. So that's really the nuts and bolts of it, at least in the marketing space. There are a lot of other use cases where it makes sense in the clinical setting.
23:26
I'm not a clinician, so I'm going to, you know, stay away from that as best I can.
23:31
But I think, again, you can't replace human emotion.
23:37
Yeah, maybe we will in our lifetime, but at this point it's not been done with a level of effectiveness that we'd be confused by it. I have seen Terminator 2, and I do not want to walk down that road.
23:52
So, I think we just have to balance, um, you know, good policy, good ethics, and then also what's best for customers. And customers, at least as of May 24th, 2023, like to rely on and trust their providers and what their doctors say.
24:14
And those are people, period. So, OK. Oh, Alan, you're back. Would you like me to ask the question again?
24:24
Now I can; I'd better do that before I get cut off again somehow. Chris made a lot of great points that I think I would agree with. You know, we're definitely, at Providence in the communications team, experimenting with this to see what it can do. And I think my advice to everyone, and I get a lot of questions about this: for communicators, I think you need to try these tools and see what they can do.
24:42
Like Ahava said, they might not be ready for prime time, but they can do some things, and I think there is good application here. You know, a lot of our staff, in terms of communicators, we've been through a lot the last three years.
24:54
Burnout is high. Workloads are hard. Working with this can take the edge off. So I know that I've tried this in doing some of our internal communications as thought starters, looking at, you know, things like press releases and saying, "Create a summary."
25:06
Now, it's not going to be the ultimate thing we send out, but it might take some of those tasks that are hard to start, or that are frustrating for us, and make the burden lower. So I think that's one really exciting area. I do, you know, have concerns about the quality of the content that comes out; it's good enough to get you started, but there are issues around copyright and originality and confidentiality. There are so many things to look at. So I'm optimistic, but I'm really being cautious with this.
25:35
But as a communicator, you've got to understand what's out there.
25:38
You've got to look at these tools. I think to bury your head in the sand and say, "We're not going to have to use these," that's just not going to be the truth.
25:45
I mean, so many of these tools are going to be incorporated into even, like, the Microsoft suite of products. Everything you use to do your work is going to start incorporating this more.
25:54
So the more you can get in on it now and have a perspective, I think that's really valuable. But I would just be very cautious about things where you want to have authenticity, like Chris was talking about: your brand voice, and where you really want to come across with human emotion, and things like that. That's where you're going to need to work with this. So, a lot of, you know, optimism about what this can do, but I think we do have to take it with some caution right now.
26:17
OK, let's see, just a couple of audience responses or questions. "SLAP is also a shoulder procedure, Chris, so it has yet another meaning." And what about privacy?
26:34
HIPAA, the other big word that we're all worried about. Yeah, there are more things that people are concerned about and would like us to talk about, too. So, what about privacy and HIPAA in terms of, you know, prompts and outputs? You know, the breaches that have already happened. Somebody mentioned Samsung.
26:56
Any response to that?
27:00
I was just going to say.
27:01
I mean, obviously, no sort of protected information should ever be entered into these systems right now. I mean, that just seems like basics.
27:10
But beyond that, I also feel like nothing confidential about the work you're doing for your organization should be included in there, because we just don't know what's going to happen to it.
27:17
So, you know, as communicators we're often working on things that are really brand-sensitive or audience-sensitive, and that stuff should just not be entered into any of these tools right now. We just don't have a feeling of safety. So I think high caution, the highest degree of caution, has got to be exercised right now.
27:33
Yeah.
27:34
Oh, God.
27:36
Yeah. OK, I'll be quick. I've been shocked by the things that have made it into these approaches. So I tell everyone to think of a rule of thumb: if you wouldn't say it publicly on Facebook or Twitter, then it probably shouldn't go in, if it's something you value any kind of privacy for. Know that it will be used in a dataset whenever you place it there, and know that there could be some significant consequences for the types of stuff that you're putting there.
28:08
Mmm hmm.
28:09
I mean I think what's happening with the Office of Civil Rights and HIPAA is really instructive.
28:13
I think that, you know, it's a complex law that was written a really long time ago, and for them to come back now and sort of wake up and be like, "Oh, every URL is protected and you guys can't track your data." It's kind of like, thanks for coming really late to our party and shutting it down. And now I'm happy to see the American Hospital Association, and good on them.
28:37
They finally applied pressure as a lobbying organization to say to OCR, what's up? So I am happy to see that those things are happening.
28:48
But to echo Chris Pace's point: patients want to be able to go to their portals, see what's going on, and have their privacy protected. But do they mind this information being shared, if it's useful to the hospital or healthcare organization to serve them better? I think that's what's really interesting about this. And, you know, people will put TikToks out about the weirdest, most intimate things that they have.
29:13
Yet OCR has decided that their URL needs to be protected when they go to a blog about how to get their kids to eat broccoli.
29:20
So that's kind of where I think the government is: so behind what this technology really is. I always think about, you know, that scene where the senator, I think it was Hatch, said to Zuckerberg, "How does Facebook make money?"
29:35
And the shock and horror of the American public when the answer was, "We sell ads, Senator." So the whole privacy question is just, it's like dinosaurs trying to legislate AI; it doesn't really make any sense. And then the First Amendment comes into play.
29:50
And so we have a lot of really complex things to sort out, but here's what I'll say about that.
29:56
Hospitals and health systems themselves struggle with workflow and process and getting everyone to follow them. Now you're going to go try to train an LLM to handle that stuff?
30:06
There's just no way that's going to work. The processes have already been documented, and they're not even being followed now. So, to Chris Pace's point, get your people and your process right first, and then look at your technology. Let's stop chasing the shiny object and get down to brass tacks. How are we going to make sure, in our communication offices, that we have voice and tone documented? That we know exactly what a finished piece of content needs to look like under stringent editorial standards? How do we stand out in the marketplace now that there's all this synthetic content being produced by these tools? What are you going to say in the marketplace that's custom? Anyone can put something into the chat and have it generate an article.
30:46
And that article sounds like it was written by a 1970s prep school boy with a nun for an English teacher. No shade against nuns, but I call it bow-tie content. It just sounds ridiculous. It's, like, three clauses, and, well, at least it believes in the Oxford comma, so we'll give it that. But how are you going to stand out? If you go interview a stakeholder about what's happening in neuroscience or cardiology, you're going to get such fresh, beautiful content at the end. So I think the privacy question is a really good question, but I think the synthetic content and the process-and-procedures question really needs to be the thing that we tackle head on.
31:26
Just one other point to add on, because of the OCR guidance, or ambiguity, depending on how you look at it, on what the future of digital looks like in healthcare.
31:44
I mean, speaking on the Banner side, I've never seen more engagement between legal, privacy, InfoSec, cybersecurity, and IT, like, all being in the same room at the same time.
31:58
And, you know, for me, I can't even copy text from an email and paste it into a web browser, you know, if I'm looking up an address or whatever.
32:07
So, you know, InfoSec is not going to let ChatGPT just be available to employees on their devices once they find out about it. So I think you have to be smart about, again, the technologies you use and the applications you use them for.
32:27
And then also sort of back up from there and say, OK, we've got skill sets that can do these things; what can we do to enrich the employee experience so that we can enhance the customer experience? And there might be technology that exists for that.
32:45
But know that you're probably going to have to explore BAAs and MSAs with these organizations.
32:53
And that's good, because you can then create an informed LLM that is based on facts that are true to your organization and how you approach those things.
33:05
But, again, that's not going to be fast or cheap. It's going to be the right thing to do, but it's going to require investment. Nothing good is free or cheap or easy, or however the old saying goes.
33:19
So, one question from the audience: employees are already using ChatGPT. So, in your organizations, do you have policies in place?
33:29
Like there used to be similar policies for social media; hospitals wouldn't let you go on Facebook.
33:36
Alright. So, you know, here we are with a whole new technology, but it seems the same concerns apply.
33:42
We're not allowed to post photos to Facebook from personal devices, that's how tight we are, but ChatGPT isn't called out specifically.
33:53
But I would say that, again, it kind of goes back to the right usage, and the intent behind the usage. So, I don't know.
34:07
I use my own Mac equipment
34:09
to do what I need to do.
34:13
But, again, if you're on company equipment, just know that everything you're putting in there is going to be tracked by IT.
34:21
And any of those activities could be questioned. So you just have to be smart about what you're doing and why you're using it, and involve people in that conversation, versus trying to work around the processes.
34:38
I think it would be a big mistake for them to shut it down, because I think it could be a really fabulous productivity tool, to Alan's point. You know, as a writer, when you're staring at that blinking cursor on a white page, it can sometimes feel overwhelming. So just put in some prompts and say, give me seven headlines for an article about how to get my kids to eat broccoli, or give me three tips. And then take that information as one part of your research, right?
35:05
So then go out, do the Google searches, do the interviews, or, you know, look at a couple of different articles around this.
35:12
Try to find some published data on why we know it's good for children to eat vegetables, and then combining that content and writing a human-centered piece that follows your brand point of view and voice and tone is a great way to do it. As a jumping-off point, I think it can be really helpful. I think it's great for meta descriptions, too. You know, "trust, but verify" is, I think, a great term to use here.
35:35
Use the tools, but then do your own research to make sure it's right. I think it would be a big mistake to take it away. Do I think you should use it to publish content? No, and that's terrifying, because you're just going to end up adding to something out there that you don't want to add to. But do I think you should use it as a productivity tool? Yes. As Alan said, we're all being pressed. We're tired.
35:57
How many times can you think of a new way to write about why your kids should eat broccoli? I think there are a lot of really great things this tool can do, and again, I think the fire metaphor is the best one.
36:07
If you take it and create a piece of published content, you're burning the house down. If you take it to get seven headlines, and then you figure out which one you want to edit to make it sound like your brand.
36:17
That's lighting a candle to make the house smell nice.
36:21
Yeah. And I mean, the other thing is, we have to be realistic about it. This is going to be fully integrated into Microsoft's suite of products. So, you know, for any organization that uses Word and PowerPoint and things like that, it's going to be there.
36:34
So I mean, some people may hold off on this for the next six months.
36:38
But, I mean, you're not going to win that battle. It's not going to stay out of our workplaces. Absolutely not.
36:44
So one of the questions from the audience is: what use cases do you envision for marketing specifically? We've talked a little bit about it, but let's talk about that specifically. Chris Hemphill, why don't you start? We haven't heard from you for a while, so you must be thinking about this.
37:00
You know, since I work at Woebot, not directly within marketing, I actually think it would be better to start with someone else. But I'm going to steal a little bit from Ahava in terms of generating headlines to figure out which one works. I have a YouTube channel that I do for fun, called Music niqab, where I read horror stories and stuff.
37:23
And one fun thing to do there is, based on the story and the summary that's being read,
37:33
I'll take that and have a little back-and-forth with an LLM about what a good set of titles would be. Part of my overall framework for practical use is to start with context rather than asking it straightaway for a bunch of headlines. So the first thing I did to generate the title was ask it if it knew about this YouTube channel called ....
38:09
And based on its response, I was able to use the context that it understood and ask it to generate some responses within that context. But the main thing I want to say, for any kind of use of it, is that the naked outputs are just not going to be of the quality you might be looking for. When it comes to getting high-quality, very specific context, it requires a lot of prompting and back-and-forth.
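[Editor's note: the context-first workflow Chris describes, probe what the model knows, then make the real request inside that established context, amounts to building up a multi-turn conversation instead of firing a single question. Below is a minimal illustrative sketch; the function names and example strings are hypothetical, not anything shown in the webinar, and in practice each assistant turn would come from a chat-completion API.]

```python
# Sketch of "context-first" prompting: build the conversation history in
# stages instead of asking for the finished headlines in one shot.

def start_conversation(context_probe: str) -> list[dict]:
    """Step 1: open by checking what context the model has."""
    return [{"role": "user", "content": context_probe}]

def add_turn(messages: list[dict], role: str, content: str) -> list[dict]:
    """Append a turn: the model's reply, or our next request."""
    return messages + [{"role": role, "content": content}]

# Probe for context first...
messages = start_conversation(
    "Do you know about a YouTube channel that narrates horror stories?"
)
# ...record the model's (hypothetical) reply...
messages = add_turn(messages, "assistant",
                    "Yes, channels like that post narrated scary stories.")
# ...then make the real request inside the context it demonstrated.
messages = add_turn(messages, "user",
                    "In that style, suggest five titles for a story set "
                    "in an abandoned lighthouse.")

for m in messages:
    print(m["role"], ":", m["content"][:50])
```

The point of the structure is that the final request rides on context the model has already confirmed, rather than standing alone.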
38:37
Mmm hmm. OK, so, specific use cases: are there any that you're thinking of, or using already? Chris and Alan?
38:52
You know, you go ahead while I'm thinking. All right.
38:57
Yeah, so I think we have a few use cases that make sense.
39:04
One is, like, through A/B testing or multivariate testing in, like, an email journey, as an example.
39:13
You know, there's the option of creating options by prompting it appropriately. And I think that's the skill that is going to be the most beneficial for folks to
39:26
get the best use out of tools like this, as Chris Hemphill described. You know, "imagine you are," or "you are acting as," and starting that conversation.
39:40
And then having it ask you questions back, so that you can respond in turn and get the best output.
39:49
But, you know, thinking through something mundane like, say, an orthopedic journey: there are steps along the way that you want to align to some goals. We're looking to reach this audience, so you can provide it a prompt.
40:08
This is the audience we're addressing. These are the channels we're using. These are the goals that we have.
40:15
Um, you know, how would you address a couple of email topics in months one, two, and three of a journey?
40:24
And, you know, again, you don't have to use what it puts out, but it's a good way to think through it and then have more informed conversations with stakeholders. So that's one of the use cases. I think there are some social media use cases as well, but I'd be curious to hear Alan's thinking about this from the broader communications lens.
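[Editor's note: the briefing Chris sketches, audience, channels, goals, then a months-one-through-three ask, can be captured as a reusable prompt template. The sketch below is our own illustration; the field names and example values are hypothetical, not from any Banner workflow.]

```python
# Hypothetical prompt-template builder for the email-journey use case.

def journey_prompt(audience: str, channels: list[str], goals: list[str],
                   months: int = 3) -> str:
    """Assemble a briefing-style prompt to paste into a chat tool."""
    lines = [
        "You are acting as a healthcare email marketer.",
        f"This is the audience we're addressing: {audience}.",
        f"These are the channels we're using: {', '.join(channels)}.",
        "These are the goals that we have:",
        *[f"- {goal}" for goal in goals],
        f"Suggest email topics for months 1 through {months} of the journey.",
        "Before answering, ask me any clarifying questions you need.",
    ]
    return "\n".join(lines)

prompt = journey_prompt(
    audience="adults considering orthopedic knee surgery",
    channels=["email"],
    goals=["educate on treatment options", "drive appointment requests"],
)
print(prompt)
```

Ending the template with a request for clarifying questions mirrors the back-and-forth style the panel recommends, rather than accepting the first one-shot answer.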
40:45
Yeah. Well, I mean, you know, I see a couple of applications right off the bat.
40:49
One is just using it as kind of an extender of your workforce.
40:53
So, you know, you can have some of these tools give you a framework for a marketing plan or a communication plan. It's going to have a lot of holes,
41:01
but it's probably going to save you, like, two or three hours of work right off the bat.
41:04
If you're a manager of people; like, the other day, I asked ChatGPT to write a job description or something. Well, in the old days, you know, I'd go out and probably look at Indeed, probably look at other places.
41:15
I'd probably spend 30 minutes looking for JDs that I could take elements from. Well, it can spit out a JD in one minute, and then I can spend five minutes revising it. So there are things like that. Some people have described ChatGPT as, like, your new intern: all those little things that would take you just a little bit of time. So I feel like it's kind of a workforce extender, and that's a use case we have right now. Not replacing people, but allowing you to do more, allowing you to get higher-value tasks on your plate and the low-value stuff off. But I also think about when you have assets. You know, we've all done campaigns where we have print ads and social and all this stuff. Well, we can put those things into some of these tools; maybe we only have copy for one thing, but it's all done, it's finished, it's written, we like it, it's got our brand voice. We can say, create five social media posts out of this, create whatever. And again, we're going to have to look at it.
42:05
We're going to have to make sure it's good, but it can do a lot of that revising and editing and things like that, as long as we have stuff already dialed in. So I feel like that's where the use cases are right now.
42:15
You know, I'm looking at how it can extend your ability to just do work. And, again, with that caution and thinking about the safety.
42:22
But those low-value things that, again, you just didn't have time for.
42:25
That's, I think, the best thing to look at right now.
42:28
I also think that, you know, Adobe just announced yesterday that you can use it in Photoshop. And if you watch that video, it's unbelievable what you can do now, in a second, that you weren't able to do before. So with images, I think it's really going to help people. Everybody says, OK, we're going to put people out of work. No, we're not! We're going to adapt, and we're going to level up.
42:50
You're going to need more Photoshop engineers who really know how to prompt the technology to make the right photos. You're going to need editors to look at the photos to make sure there aren't any possible problems with them, any deepfake issues, and to make sure that your editorial and brand style guides are being followed. So I think it's exciting. I mean, if you have a stringent editorial process in place for every piece of content you create, and you can find shortcuts, it's a beautiful thing.
43:19
The one thing Chris and Alan didn't say, which I think I've heard them say but want to make sure they agree about, is that you cannot use this thing to build a strategy.
43:27
You can use it to write, like, a campaign plan and then fill in all the holes. But as far as I know, thus far; like, I asked it to give me a list of influential Jewish women, and it put a man on the list, someone who identifies as a man. I don't know what dataset it was given.
43:45
But I would be afraid to ask it to do anything strategic, because it doesn't really get facts right half the time. Well, not half the time, but it hallucinates, which is great.
43:53
Now we have machines hallucinating, just what we needed. So I'm just wondering if any of you think you would use it for strategy?
44:05
Yeah. I agree with you right now. I would not.
44:08
And I think the value for professionals right now is to take the strategy and the experience, you know, and marry it up with a tool where it makes sense.
44:15
Well, on a personal note, one thing that I really advise people: when you're on a platform like LinkedIn, and people are coming to you for your advice, your strategy, your experience, that's the absolute worst place to just generate ChatGPT content and throw it out there. Because it doesn't give any of your insights. It doesn't have any raw authenticity. So that strategy piece, the experience that you bring to the work.
44:37
That's not what it can do now, and I think that's something it's going to struggle with in the future.
44:41
And the value-add for communicators and marketers in our field is blending: knowing how to use these tools where it makes sense, with our own experience and our own strategic vision and things like that. That's where the value is really going to be added.
44:55
Yeah, you're looking at like the vastness of unstructured information still.
45:02
I mean, we can plumb, you know, terabytes and terabytes of information into language models.
45:07
But, it will never replace, you know, the, the 1 to 1 hallway conversations.
45:14
If you still have those anymore; I just talked about how I haven't been in the office in years.
45:20
But, um, no.
45:21
I mean those organic conversations that build context and frame how people deliver strategic thought and creative thought.
45:32
And then, applying from there, you kind of ladder down into tactical things eventually.
45:40
I feel like that's where we are in this era of language modeling. But, you know, again, in Terminator 2, the machines figured out a way to take over the world.
45:51
So we'll get there one day, I hope. I hope not in our time. Do we have time for one more?
45:58
Yeah, OK. There's a term I want to throw out. It's called the McNamara fallacy.
46:03
And what happens with the McNamara fallacy is that if something came from a spreadsheet or from some dataset, people have a tendency to believe it more.
46:17
So, like, as you're having that back-and-forth conversation with ChatGPT about what constitutes an effective headline, or what have you,
46:27
just be aware of the fact that, for a lot of people, in a lot of cases, just because it came from data, or just because it came from this AI approach, you might be more susceptible to believing it.
46:41
So keep those principles that you're grounded in, and the strategy that you've learned, firmly in play before accepting the thing that ChatGPT generates. Otherwise you end up with just a lower threshold for the quality level of that content.
46:59
It wrote my bio once, and I don't know who that woman is.
47:03
I don't know where it pulled that from.
47:06
It had a lot of bad information about mine, too. So let's talk about the new skill, which is writing prompts.
47:14
So Boston Children's recently put out a job spec for a prompt engineer.
47:18
It's mainly about research and clinical applications, but one of the responsibilities is to test and evaluate the performance of AI prompts. Do you think writing prompts will become the new skill set for marketers? And what have you learned by writing your own prompts? The thing is, when you think about ChatGPT, you see the word "chat" in it, but I've only ever experimented with asking it one question, not having a whole conversation. Have you explored writing prompts and experimenting with prompting?
47:51
You know, I would just say that that was really smart PR and marketing for them to post that job description, because I think that was probably the biggest value of it; lots of appreciation for that. But on a serious note: yes, writing prompts is going to be a skill set that is helpful. I don't know if everyone on the panel would agree or disagree, but I don't think it's incredibly difficult. Actually, it's pretty easy. If you're good at asking questions, and if you're a communicator or marketer, you're probably pretty good at asking questions, this is something that you get a feel for.
48:20
You know, it's not that hard, but I do think it's going to be something you need to know how to do. And I also use it to regenerate a couple of times; you know, don't be satisfied with the first output. But yeah, I think knowing how to do prompts is going to be a good skill set that's going to be valuable to have.
48:38
I don't think it's incredibly difficult. That's at least how I feel about it right now.
48:42
Anybody else?
48:45
Playing around with prompts.
48:46
I'll just throw in: I read that job description, which prompted me (sorry about that, that was a joke; ChatGPT-worthy, at least) to look at a bunch of other prompt engineer jobs.
48:59
When I looked at a bunch of these job descriptions, they really looked like machine learning engineer roles that do prompt work. Meaning the requirements look for somebody who's already an expert in natural language processing and deploying these models at scale, with the responsibility around prompting added on. That's what it looks like for now, but that could evolve.
49:23
Um, I'd love to reiterate the fact that there's strategy, and things that we're trained in over a long period of time.
49:31
And, again, think about the fact that the conversations you've had with your leaders and mentors didn't make it into the dataset that ChatGPT and all these models are trained on.
49:43
So even when you're evaluating someone to hire: are prompting skills the thing that you're hiring for, or is it producing high-quality output?
49:53
It's the things they've done on a consistent basis that we're hiring for. So my thought is to focus still on the result; if prompting helped get there, fine.
50:06
If the result is high-quality output, then whether that came from prompting or not might not make a difference.
50:17
The best job description I saw: Microsoft had a kill switch engineer for AI.
50:26
And the person makes about $300,000 a year to pull the plug on the machine in case of Skynet scenarios.
50:36
Could you put that job description in the chat, in case anybody wants to apply? This conversation reminds me of one of my favorite movies, WarGames, from the eighties. It's a great movie, and it's basically about this whole issue: they try to get a computer to run their nuclear war scenarios,
50:54
and a kid breaks in through a back door via a gaming system. Somehow the computer decides to launch a nuclear war against Russia. And at the end, Matthew Broderick is trying to get it to play tic-tac-toe, to get it to learn that there's no winning in tic-tac-toe, and there's no winning in a nuclear war either, right? So he's hitting the keyboard,
51:15
and he goes, "Learn, goddammit, learn." And I think all of us are learning alongside this thing. We're learning how to prompt it. We're learning how to use it.
51:23
We're learning how to be smart about it, how to be good stewards of it. The minute the public loses confidence in it,
51:30
We're doomed. I mean, think about the FDA and what they did with .... Now you can't get a drug approved. I mean, we had a lifesaving vaccine, and it had to go under emergency authorization. There was fighting about that. People didn't want to take it, because they didn't know what it was going to do.
51:44
So if something big and bad happens, particularly in healthcare, and one of us was somehow responsible for not making sure we were really careful,
51:55
It's going to be a disaster.
51:56
So, I just think everyone's super optimistic about it, but we're all learning. And when you're in learning mode, you've just got to have guardrails.
52:03
And you've got to be smart and check with your colleagues. Ask them, does this feel right to you? Should we go further with this? And lean on your partners and your agencies; they are at the forefront, thinking about this stuff all the time. We're experimenting at Aha Media Group with how to make our work faster and more efficient, to pass savings on to our clients or generate even better content now that we can move through datasets, being really, really careful about it.
52:28
So I think the future is not, you know, Arnold Schwarzenegger coming back to save us all. But I do think we have to be really smart and careful about it.
52:39
So one of the audience members let us know that there are 4,000 open positions in the US for prompt engineers right now. Thank you for that fact.
52:49
And I'd like to know how many kill switch jobs there are; let's just say that's basically Homer Simpson's job at the nuclear plant. OK, so let's get through some audience questions in the last few minutes here, to make sure we pay attention to what people are interested in. Are you looking to incorporate multilingual content? If yes, how?
53:17
Anybody have a response to that yet?
53:21
I don't know.
53:23
I think it's something that we're probably looking at; we're not there yet. But it is interesting, as far as how fast it can do translations. I mean, think about how much of the work we send out for translation services. There are going to be all the questions about verification, but you really could potentially have streamlined workflows there. That's a great one to look into. Yeah, so we just launched, on bannerhealth.com, a full Spanish translation.
53:48
We soft-launched it, so I guess this is the big announcement.
53:52
But, um, we have like 6,000 or 7,000 pages of content that we worked on with Google Cloud,
54:02
So, under a BAA, to inform the data dictionary. And we worked with clinical translators at Banner to
54:14
validate the data. And, you know, we knew we wouldn't get it exactly right,
54:19
but it essentially turned our website into two sites, one in English and one in Spanish, fully SEO-friendly.
54:29
I think we're one of the first to do that.
54:31
54:32
It's something that we felt comfortable with because, again, it was more about the contract and how we worked with our partners to get there.
54:43
Yeah, because we could have just put an on-the-fly widget on there and let it do the translation. But we learned a long time ago, when we launched our Exhale brand, that in Spanish, one term for "exhale" reads like the dying breath,
55:02
versus another, which is the breath of life. And so if we had just gone with Google Translate's version of our brand statement, we probably would not be where we are today with market share.
55:12
I want to throw one thing in. This isn't from a healthcare marketing perspective, but from a personal friend who is a legal translator, specifically for legal documents, back and forth in Portuguese.
55:28
So it's a highly specialized and highly specific use case.
55:34
And in his experience working, he's kind of seen the writing on the wall with regard to the translation business. So when ChatGPT came out, the 3.5 edition, he worked with it, and it ended up costing him more time because of all the edits he would have to make on the back end.
55:52
55:53
But when GPT-4 came out, he switched; he went to the paid plan and was astounded by the fact that whatever improvements they made on the back end, which were probably not specifically planned to improve Portuguese legal translation, allowed him to work much, much faster.
56:12
So there's kind of a leveling up happening with machine translation under these paradigms. But my friend has been doing these legal translations for the past 12 years, so this still falls to an expert to make sure the translations are valid. He was astounded that GPT-4 is now in his workflow, because the context that the previous iteration was missing, this one gets, to a very surprising level. I think that's a perfect example of what I mean by more jobs being created: if you can translate documents faster, you need more editors to make sure you're not making mistakes like the one Chris Pace talked about. That's exactly what I mean; if you can create more, you can make more job efficiencies, and we're going to find even new things. And if there are 4,000 prompt engineer jobs now,
57:07
then there are going to be 80,000 in the next five years, or even faster than that. So, I'm a content strategist by training. I started as a journalist, and then I became a content strategist. When I was in college, nobody had heard of content strategy; it was Web 1.0.
57:22
So I just think that it's going to grow and evolve, and we'll learn from it, and I think it's very exciting.
57:28
And I think we've talked about the good, the bad, and the ugly of it.
57:34
I'll plant one thing that's a little bit at odds with that. Right now, yeah, we're going to battle it out.
57:42
The rates that my friend gets paid have been consistently cut, cut in half recently, and the jobs that they're having people apply for on the websites no longer involve hiring translators, but involve hiring people to work with large language models. Those jobs are going to change. There's a learning curve here, right? Somebody's going to make a mistake, and it's going to slam them, just like at healthcare institutions. Somebody here is going to be the first one through the wall, and the first one through the wall gets the bloodiest. Then the executives are going to go, all right, it's really important for us to be careful about this.
58:18
Let's bring those people back in. We've seen it time and again with everything, so I just don't think that will change.
58:25
So, one of the questions is related to this conversation. Often in communications, there's a prerequisite for putting in disclaimers, you know: "This isn't medical advice; seek your physician's advice." Should there be added disclosures for patient information that's partially generated artificially?
58:46
Yes.
58:48
Like, "This was generated by ChatGPT"?
58:54
Yes. It's such a great question, because I'm not the real expert on that.
58:59
Yeah, well, it's a really great question, and I don't have a full answer on it. I think there is going to have to be some transparency, and I feel like a lot of people, and Chris could speak to this,
59:10
but, you know, if people are talking to a bot and they know they're talking to a bot, they're OK with it. If you try to make it look like a human, give it a name, make it feel human, and kind of hide it, that's not so good. So I think transparency is great; how exactly we word it or do it, I don't know.
59:25
We want people to understand what they're interacting with and I think that that's the best path.
59:30
I think one of the things we learned during the pandemic was how important human to human connection was.
59:36
And, you know, we launched, as everybody did, a bunch of self-service things, including telehealth and online scheduling and so on.
59:47
And when people are up against difficult things, they just want to be able to talk it through with a person.
59:54
So it actually increased call volume to our call center for a period of time. There's a right balance of self-service and, I guess, synchronous or asynchronous communication, depending on how you look at it.
1:00:09
But at the end of the day, the harder the situation the person is up against, the more they're going to want to have that human conversation.
1:00:18
And that's, again, empathy, trustworthiness, and compassion: things that humans can do best.
1:00:29
So, let me just interrupt for a second. It's now one minute past the hour, and we can continue to take questions for a little while, maybe another five minutes. There are still a couple hundred people on the call, so we want to make sure that we get to as many questions as possible. So we'll just keep going for five minutes.
1:00:49
OK, here's a question about copyright infringement: Who will own the content?
1:00:54
So I actually wanted to talk about that earlier, and we got off on a bit of a tangent, but quite a number of our clients have come to us and said, what are we going to do about protecting our content? We've paid a lot of money for this custom content, and it's now being used by other people to generate their own healthcare content. How do we handle that?
1:01:14
And my answer is, I don't know.
1:01:16
I really don't know. And that's why there needs to be transparency around the language models and what they were trained on. Because we can write a piece of copy for one health system where the doctor says emphatically, this is the way we do this, and we can write another piece of copy for a different health system where the doctor says, this is not the way we treat this. So both of those pieces of content are being fed into the LLM, and then how does it know what the right answer is? I think that part of it is extremely dangerous, and very scary.
1:01:45
And I think it's also sort of like they're just stealing; it's basically plagiarism.
1:01:50
I mean, when the first iteration came out, we ran like four pieces of content through a plagiarism checker and found plagiarism in three of them, which was pretty scary to us. Now, experimenting with 4, I think it's changed a little bit.
1:02:04
But again, it's using your proprietary intellectual property to train a model. In capitalism, that's what we call not fair.
1:02:15
So, I do think that who's going to own the content, and the plagiarism aspect of it, and the copyright infringement, is a huge issue, even when you just ask it to generate seven headlines for you.
1:02:27
It's not pulling from any one particular source, but if you are a hospital or health system that's invested in writing content about rare diseases, for example, it might be using your content to write content for another health system. And that's where you just can't use it, because you're going to get dinged on search, and you're going to get dinged by legal. And if Congress does pass laws about this, or, you know, the states get involved, then you're really talking about a commerce and legalistic nightmare.
1:02:55
Yeah, I don't know. It's really powerful, because we're kind of at the beginning of this. Think about the legal case that Getty Images brought over generated images that even included its watermark in the output.
1:03:10
So content producers who make their money producing unique content are starting to dig in their heels and try to identify ways they can combat the use of their content.
1:03:27
There's a writers' strike going on in Hollywood, and this is one of the big issues.
1:03:32
You know, this is a big issue for them. I'd like to point out, there's been no Saturday Night Live happening; they haven't had ..., right? This is big for them.
1:03:42
I'd just like to see what ChatGPT would have written; we can do an audience poll on that one afterwards. OK, I think that's all the time we're going to have for questions today. So I'd like to thank the panelists; it was such a great discussion. And thank you, Aha Media, for sponsoring today's webinar. So please take a look at the chat. I believe there's a link there to Aha Media's survey on plain language, and they would love to have your responses. So, we're doing a quantitative and qualitative original research study about healthcare and plain language. If you have ever argued with a stakeholder about the claim that academics want jargon-filled information, this is the survey for you. Our qualitative research has already revealed that that is not true.
1:04:35
And so we're hoping to generate some really great original research that you'll be able to use to talk to your stakeholders about how important it is to use clear, concise language. So please do take that survey.
1:04:47
We're going to post it on LinkedIn, and we're going to put it in our e-mail newsletter, but it would be really, really helpful to have the healthcare community weigh in on this incredibly important subject. Thank you.
1:04:58
Great, OK, so just a reminder to keep an eye out for an e-mail in a couple days with a link to view the recording, and thank you for attending today's webinar. We hope you enjoy the rest of your day.
1:05:09
Thank you so much, everybody, and thank you for being here. I know I speak for everybody when I say all of us are happy to continue the conversation.
1:05:17
All right.


