5 Major Risks AI Poses for Healthcare
Boston Children’s recently posted a job description for an AI prompt engineer to join its Innovation and Digital Health Accelerator. The ideal candidate needs a “strong background in AI/ML, data science, and natural language processing, as well as experience in healthcare research and operations.”
This came only months after the November 2022 release of ChatGPT, which alerted the masses to the potential uses (and risks) of large language models (LLMs).
Even in a sector slow to embrace new digital technology that could potentially compromise patient privacy, AI’s promise to reshape healthcare has many cautiously testing the waters.
In a recent webinar from eHealthcare Strategy & Trends, four industry leaders discussed use cases and potential hazards of artificial intelligence that could dramatically alter work — and even the world as we know it.
How Do Large Language Models Like ChatGPT Work?
Simply put, LLMs convert text into numerical representations in a process called embedding. "They turn words into zeros and ones and then apply complex math to make predictions. But the outcome is only as good as the input," says Ahava Leibtag, president of Aha Media Group.
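The "words into zeros and ones" idea can be illustrated with a toy sketch. This is a hypothetical simplification (a one-hot encoding over a tiny made-up vocabulary), not how any production LLM actually represents language — real models learn dense embeddings over vocabularies of tens of thousands of tokens.

```python
# Toy illustration of embedding: mapping each word to a vector of numbers.
# Hypothetical sketch only; real LLMs learn dense, high-dimensional vectors.

vocab = ["patient", "doctor", "appointment", "prescription"]

def one_hot(word):
    """Represent a word as zeros and ones based on its vocabulary position."""
    vec = [0] * len(vocab)
    vec[vocab.index(word)] = 1
    return vec

print(one_hot("doctor"))  # [0, 1, 0, 0]
```

Once words are numbers, the "complex math" Leibtag mentions — matrix multiplications over these vectors — can be applied to make predictions.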
This process allows the LLM to predict the next words based on the input. “If somebody inputs a string of text, predicting the next string of applicable numbers translates into applying deeper amounts of context in new and exciting ways,” says Chris Hemphill, senior director of commercial intelligence at Woebot Health.
That output is then refined through human feedback: in a data-labeling process, reviewers indicate whether they like or dislike each response, and those judgments steer future outputs.
This can feel like magic to the end user, but it’s not without risk.
In a new article, we discuss the opportunities, risks, and potential pitfalls of LLMs and reveal 5 major risks that AI poses for healthcare.
Read the full article here: Is ChatGPT Safe for Healthcare?