ChatGPT can save lives in the ER, but it needs supervision: 'It is at once both smarter and dumber than any person you've ever met'

Apr 7, 2023, 17:33 IST
Insider
GPT-4 (Generative Pre-trained Transformer 4) is a successor to ChatGPT. It was released to a limited audience on March 14, 2023. Jaap Arriens/NurPhoto via Getty Images
  • GPT-4 is the latest AI model released by OpenAI.
  • It's more advanced than GPT-3, and can help translate, summarize, and process medical information.

GPT-4 is the newest and most advanced version of an artificial intelligence model available from OpenAI — makers of the wildly successful ChatGPT product — and doctors say it could upend medicine as we know it.

While we already knew that previous GPT versions 3.0 and 3.5 could get solid scores on the MCATs, now experts say that GPT-4 may also be capable of saving human lives in the real world, treating emergency room patients quickly and with finesse.

In the forthcoming book "The AI Revolution in Medicine," available as an e-book April 15 or in print May 3, a Microsoft computer expert, a doctor, and a journalist team up to test drive GPT-4 and understand its medical capabilities. (Microsoft has invested billions into OpenAI, though the authors of this book say it was written with editorial independence.)

The three experts — Microsoft vice president of research Peter Lee, journalist Carey Goldberg, and Harvard computer scientist and physician Isaac Kohane — say this new AI, which is available only to paid subscribers for now, is more advanced and less silly than the previous chatbot. And it's so good at digesting, translating, and synthesizing information that they say it could be used in emergency rooms to save time and save lives — today.

"We need to start understanding and discussing AI's potential for good and ill now," the book authors urge. In fact, they suggest, it probably already is being used in some medical settings, whether we know it or not.


How GPT-4 could save a life

In a Friday, May 6, 2016 photo, medical resident Dr. Cameron Collier briefs a group of medical residents and medical students as they visit with a patient. Gerald Herbert/AP Images

In the opening pages of the book, the authors offer a hypothetical — but entirely possible — interaction between a medical resident and GPT-4 as evidence that the technology will most certainly be used by both doctors and patients soon.

It starts with an imagined patient in critical distress, his heart rate soaring, his blood pressure tumbling, his face turning pale, then blue, as he gasps for air. His care team inserts "syringe after syringe" into his IV, trying to boost his blood pressure and improve his heart function, but nothing seems to be working.

A second-year medical resident whips out her phone and opens the GPT-4 app, asking the AI for advice. She explains to the bot that this patient "is not responding" to blood pressure support, and mentions his recent treatment for a blood infection. Finally, she pleads with the artificial intelligence, "I don't know what is happening and what to do."

Instantly, the bot responds with a coherent paragraph explaining why the patient might be crashing, mentioning relevant recent research, and suggesting a white blood cell boosting infusion treatment. The resident realizes the AI is implying that this patient could be going into life-threatening sepsis. If that's the case, he needs that medicine, fast.

The resident quickly orders the AI-suggested infusion from the hospital pharmacy, and then — critically — double-checks what the bot told her, saying "show me the study" into her phone.


"She somehow felt like a benevolent mentor-servant with access to nearly all the world's medical knowledge was holding her hand," the authors imagine in the book. After the medical resident fills the patient's prescription, she again uses the AI to automate required paperwork for his insurance, a major time-saver.

"In almost any way you can name, from diagnosis to medical records to clinical trials, its impact will be so broad and deep that we believe we need to start wrestling now with what we can do to optimize it," the book says of GPT-4.

In recent weeks, other experts have expressed a similar brand of fresh excitement, coupled with terror, about the prospect of AI being used in every corner of medicine.

"It is indeed a very exciting time in medicine, when the term 'revolution' is a reality in the making," physician Eric Topol wrote on his blog when he reviewed the new book.

GPT-4 isn't always reliable in medical settings

GPT-4 might sound like the future of medicine, but there's a catch. GPT-4 can still make mistakes, and sometimes its responses include subtle inaccuracies in otherwise sound medical advice. The experts stress that it should never be used without human supervision.


The wrong answers that the AI gives "almost always look right," the book says, and could be perceived as convincing and reasonable to the untrained eye — but could ultimately hurt patients.

The book is filled with examples of GPT-4's blunders. The authors note that GPT-4 still makes things up when it doesn't quite know what to do.

"It is at once both smarter and dumber than any person you've ever met," they write.

GPT-4 also makes clerical mistakes, like copying things down wrong or making straightforward mathematical errors. Because GPT-4 is a machine-learning system that was not explicitly programmed by humans, it's impossible to know exactly when and why it gets these things wrong.

One cross-check the authors suggest for combating errors in the system is to ask GPT-4 to review its own work, a tactic that sometimes reveals mistakes. Another is to ask the bot to show its work so you can verify its calculations, human-style, or to ask it for the resources it used to reach its decision, as the medical resident did in the hypothetical scenario.


"It is still just a computer system," the authors conclude, "fundamentally no better than a web search engine or a textbook."
