Artificial Intelligence Can Bring Real Ethical Concerns to Healthcare

August 20, 2021

// By Jim Samuel //

Increased use of artificial intelligence (AI) in healthcare has brought many benefits to healthcare providers and patients. But despite the benefits, there are ethical concerns — unintentional or malicious — that any healthcare organization that embraces AI should address.

The Peter Parker Principle says, “With great power comes great responsibility.” It’s a phrase often attributed to Spider-Man’s Uncle Ben as he talked about the superhero’s powers. But it could be equally applicable to the use of artificial intelligence (AI) in healthcare today.

That’s because while AI can bring significant capabilities to healthcare, it can also create ethical dilemmas when not managed responsibly and correctly. The question is: What are the ethics of AI and who is responsible for making sure they are met?

“We need to think about what the ethics and integrity of AI and healthcare are when AI is being inserted unknowingly or unwittingly into the patient engagement process,” says Jim St. Clair, chief trust officer at Lumedic, a Tegria company that specializes in digital trust and improving processes in healthcare.

“Even if we feel that the ethical model provided algorithmically by AI is sound, it is the way we choose to use it instead of a human-to-human interaction that introduces other ethical considerations,” St. Clair says. “We’re already comfortable with AI when I have an inquiry about my bank account, or I need to check on the status of my Home Depot order, but it’s entirely different if we’re inquiring about medical conditions.”
