
At HIMSS25, taking place at the Venetian Sands Convention Center in Las Vegas, the discussion at Monday morning's AI Preconference Forum turned to the crucial set of questions around how to engage physicians and others in the adoption of artificial intelligence (AI) in patient care organizations.
The first morning panel, titled "Navigating AI Integration Through Change Management and Workforce Inclusion," was moderated by Attila Hertelendy, Ph.D., of Florida International University. The panelists were Spencer Dorn, MD, MPH, MHA, of the University of North Carolina at Chapel Hill; Irene Louh, MD, an adult intensivist at Baptist Health in Jacksonville, Florida; Mark Sendak, MD, MPP, of the Duke Institute for Health Innovation in Durham, N.C.; and Scott Hathaway of ServiceNow.
Hertelendy asked Dr. Dorn about his hopes for AI in terms of improving the workloads and productivity of frontline physicians, nurses, and other clinicians. "That is one of the great hopes: we have this magical technology; can we apply it so that it relieves burden and heavy work?" Dorn said. "In many ways, I am optimistic. But we have to be sensible and realize that while some burden could be relieved, some new burdens may also be added."
"AI holds great promise for healthcare, for our workforce and our teams," said Dr. Louh. "The core of being a healthcare provider is that we want to take care of our patients and really improve patients' health. Over time, healthcare has made that harder through its structure and function, so any way we can really relieve that burden is important. There are many opportunities in leveraging AI, so this is a really exciting moment to be in healthcare."
Dr. Sendak emphasized, "I would say that most of the use cases I have worked on putting into clinical practice try to relieve part of the clinical burden on frontline physicians. One of the first use cases for us was identifying gaps in care for patients with advanced kidney disease and other chronic diseases, trying to help primary care docs manage care and make sure that people get referrals, prescriptions, and so on, as well as identifying emerging sepsis."
"How do we create strategies to engage our employees, to overcome skepticism and build trust?" Hertelendy asked the panelists.
"Frontline workers should be skeptical of AI, not necessarily cynical, but skeptical; we have been promised many things in the past," said Dorn. "I don't think we should expect physicians to run to this with open arms. Second, AI is kind of a meaningless term at this point, with so many different technologies being discussed at once, so some baseline education could go a long way. And third, we need to align around a common goal: why are we committing to these technologies?"
"I feel there are a few different camps" in her health system, said Louh. "There is the camp that has been sold something that sounds great, and some people are idealistic that it will solve all the world's ills; and there is the very skeptical group, which has also been burned by technology, as with the EHR. And I echo Spencer on this: education and awareness is an area where we have seen benefits through transparency. We have implemented LLMs for drafting responses; that is common now. The team needs to know that this will take work and partnership to make it work."
Answering a question about the anxiety many physicians feel right now, Hathaway said: "Physicians show up with a great burden on their backs. And now they have to talk to an AI that they may believe is smarter than they are or has access to more information. And it feels like a black box. We have to be able to provide transparency" into how AI really works.
"Are you hearing concerns about job loss?" Hertelendy asked. "Let's take a step back," said Dr. Sendak. "Candidly, we are looking at a nine-figure deficit in our organization. So the question will not be, 'Will AI take my job?' but rather, 'Will my job be eliminated, because AI will be used when people are cut?' I am married to a frontline primary care physician. We are in a serious shortage of behavioral health services," among other things, he said.
"There is another piece that gets minimized," Louh said. "We have a nursing shortage in this country; we have a physician and provider shortage in this country. And in a way, we have no choice. It is real: people are worried about losing their jobs, and change is hard for people. But can we think of AI, in some way, as actually solving some of these problems? At the end of the day, we are all human, and we need investment and architecture to solve this."
"I think less about replacing health workers; although there is a risk for certain highly repetitive tasks that machines can take on, it is more likely that everyone keeps working but the nature of our work changes," Dorn said. He went on to say that "one of my favorite studies in JAMA last year found that models can outperform physicians, but it turns out that most physicians were using large language models like search engines, and they are not really search engines. So we need to help people understand that this is a different class of technology; having some basic literacy education would help."
"And how do you create space for your team members, who are already burdened, and where does that fit in our organization?" Louh said. "About two months ago, we retrained all our nurses on our EHR, which we had been live on for about two and a half years. We wanted to help them level up how they use the EHR. It required space, time, and money. It was very useful and helpful, but it required C-suite-level commitment. But documentation time for our nursing staff went down and it made them happier; they understood the tools better. And we have to do that with AI. Just take a basic predictive model for sepsis: what is it for? What isn't it for? How do you use it, and how do you think critically about what you are seeing? Those kinds of concepts are really important."
"How can we build solutions for our frontline clinicians?" Sendak asked. "It is not realistic to think that every primary care physician should be doing independent due diligence on the algorithms. There is a behavioral health crisis among our young people, and that due diligence is not something frontline physicians should be doing. I have seen a positive domino effect, where we create an algorithm for a particular use case, and then other groups adopt similar strategies. That is the classic innovation strategy. And at the national level, we are seeing a massive digital divide, with perhaps a few dozen organizations, Duke, UNC, NewYork-Presbyterian, that are networked and advanced. But how do we help safety-net hospitals, critical access hospitals, and federally qualified health centers adopt the technology? And how do we help leaders make decisions that help their frontline caregivers?" Helping patient care organizations throughout the U.S. healthcare system to effectively adopt AI will be crucial, he emphasized.