Monday, April 13

AMA says doctors should be at the forefront of AI


The inherent risks of AI necessitate ongoing oversight and governance, and physician expertise can’t be substituted, AMA says.

The American Medical Association is taking a cautious approach to artificial intelligence implementation in healthcare, suggesting to lawmakers that it would be prudent to prioritize education, physician oversight and data security.

The AMA refers to AI as “augmented intelligence” to emphasize that the technology should enhance human intelligence, not replace it.

One of the AMA’s central policy recommendations is that physicians should remain at the forefront of decision-making when it comes to AI adoption, and be actively engaged in reviewing and validating AI outputs to maintain both clinical accuracy and patient safety.

The AMA outlined several considerations for lawmakers.

Physicians should be full partners at every stage of the AI lifecycle, the group said, from AI design to clinical integration, since they’re uniquely qualified to determine whether an AI tool is valid for a given indication.

There should also be a coordinated, transparent, whole-of-government approach, the group said. There needs to be clarity and consistency for developers, deployers and end users, including physicians and patients. Fragmented or duplicative rules would slow innovation and confuse clinicians; federal entities will have to act together to create a coherent oversight system.

What’s needed, in the AMA’s view, is secure data that is free from bias. The group said strong deidentification and consent safeguards are needed, and advocated strong governance to ensure data privacy and security, including transparency to patients and physicians about how data is used and protected.

The AMA also emphasized the importance of upskilling for the physician workforce, which it said will help increase understanding and allow physicians to appropriately assess AI tools. 

“This expertise cannot be substituted and is essential for determining whether AI technologies meet the high standards required for healthcare,” the AMA said in a statement submitted last month to the Senate Health, Education, Labor and Pensions (HELP) Committee for its hearing, “AI’s Potential to Support Patients, Workers, Children and Families.”

“Physicians must play a central role in the ethical development, deployment and utilization of AI technologies in healthcare, as their clinical expertise is indispensable in ensuring these tools are safe, effective and trustworthy,” the AMA said. “Establishing strong digital health foundations, including telehealth infrastructure, robust privacy and security protections and seamless interoperability, is critical to enabling this transformation.”

WHAT’S THE IMPACT 

When AI-powered technologies are implemented properly, the AMA said, they “hold significant potential to enhance patient-centered care, improve clinical outcomes and reduce costs.”

But the AMA emphasized that the inherent risks of AI technology necessitate ongoing oversight and governance, noting that physicians’ expertise “cannot be substituted and is essential for determining whether AI technologies meet the high standards required for healthcare.”

Calling for clarity and consistency for developers, deployers and end users – including physicians and patients – the AMA said federal agencies should act together to create a coordinated and coherent oversight ecosystem, with an eye toward minimizing fragmented or duplicative rules.

What is needed, the group said, is secure data that’s free from bias. Meanwhile, investments in physician education – in medical school and through continuing medical education (CME) for practicing physicians – will help increase understanding and allow physicians to appropriately assess AI tools, the AMA said.

The group said that its new Center for Digital Health and AI, launched in October, will address these key areas, creating opportunities for physicians to shape AI and digital tools so they work within clinical workflows, and enhance patient and clinician experience.

The center will also facilitate education and training, as well as policy and regulatory leadership – working with regulators, policymakers and technology leaders to shape benchmarks for safe and effective use of AI in medicine and digital health tools.

THE LARGER TREND 

Eighty-eight percent of health systems are using artificial intelligence internally, but just 18% have a mature governance structure and fully formed AI strategy, according to an August report from the Healthcare Financial Management Association and market research company Eliciting Insights.

Governance is lacking despite the fact that 71% of survey respondents have identified and deployed pilot or full AI solutions in finance, revenue cycle management or clinical functional areas.

The use of artificial intelligence in healthcare is gaining popularity among physicians, found a February survey from the American Medical Association, though many remain guarded in their enthusiasm due to lingering concerns.


Jeff Lagasse is editor of Healthcare Finance News.

Healthcare Finance News is a HIMSS Media publication.

