
Patients Should Be Informed How Artificial Intelligence Is Used in Their Care – Renal and Urology News

If you’re a sports fan, you’ve probably noticed the increasing use of technology as part of the game. From cricket to soccer to tennis, artificial intelligence (AI) technology is used as an adjunct to referee and umpire calls. It’s been used in tennis for the last 10 years. High-speed cameras and tracking systems help officials determine if a tennis shot is out, a goal has been scored, and for the cricket fans out there, if there was a “leg before wicket.” 

Although some fans may have mixed feelings about how technology can disrupt the tradition and purity of the game, both the fans and players now overwhelmingly accept any AI line call, and play carries on without further dispute. For health care professionals, understanding how fans react to the use of AI technology in sports can guide us in helping patients appreciate the role, process, and limitations of medical science in health care decision-making.    

Hawk-Eye is the proprietary technology increasingly used in professional tennis matches such as the US Open in New York City, where it has replaced all human line judges (the match umpire remains).1 Hawk-Eye uses 8 to 10 high-speed cameras positioned around the court to track and record the path of the tennis ball. The AI software then aggregates and interprets the incoming image data to predict the most likely path of the ball as it lands in or out of bounds. 

The increasing acceptability of AI in sports may be due in part to the public’s belief that technology is less prone to error and more accurate than human beings in assessing line calls. Like any statistical analysis of measured data, however, Hawk-Eye operates with confidence intervals that bound its final determination of whether a ball is “in” or “out.” For example, if Hawk-Eye’s 95% confidence interval is +/- 3 mm, then calls made within that margin of the line boundary are at best uncertain and at worst wrong. Adding to the uncertainty, tennis balls are fuzzy and boundary lines are not perfectly straight (especially on grass), which suggests that the accuracy of any assistive technology is imperfect.  
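To make the point concrete, here is a minimal simulation (not Hawk-Eye's actual algorithm; the noise level and layout are illustrative assumptions) showing how a measurement system whose 95% interval is about ±3 mm almost never miscalls a ball that lands well inside the line, but miscalls a ball landing 1 mm inside the line far more often:

```python
import random

random.seed(0)

LINE = 0.0    # boundary position in mm; the ball is truly "in" if its position <= 0
SIGMA = 1.5   # assumed measurement noise (mm), so ±3 mm is roughly a 95% interval

def simulated_call(true_position_mm: float) -> str:
    """Return the system's call given one noisy measurement of the bounce."""
    measured = true_position_mm + random.gauss(0, SIGMA)
    return "in" if measured <= LINE else "out"

def error_rate(true_position_mm: float, trials: int = 100_000) -> float:
    """Fraction of calls that disagree with the ball's true position."""
    truth = "in" if true_position_mm <= LINE else "out"
    wrong = sum(simulated_call(true_position_mm) != truth for _ in range(trials))
    return wrong / trials

# A ball 5 mm inside the line is essentially never miscalled;
# a ball only 1 mm inside the line is miscalled roughly a quarter of the time.
print(error_rate(-5.0))
print(error_rate(-1.0))
```

The calls look equally authoritative on the stadium screen, but the underlying certainty is very different depending on how close the bounce is to the line.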


What does this mean for talking with patients about scientific uncertainty when their understanding of the scientific process is low and their expectations of technology may be high? Consider a 58-year-old patient without cardiac risk factors who wishes to reduce his risk of heart attack by taking a daily aspirin. Based on a range of population-based studies, 333 patients need to take a daily aspirin to prevent 1 myocardial infarction (MI). At the same time, 250 patients need to take a daily aspirin before 1 person is harmed by a major bleeding event (that number is closer to 1000 patients for an intracranial bleed).2 How do we counsel patients who are primed to believe that taking aspirin will reduce their risk of MI, when in reality there is considerable uncertainty about which patient will benefit? Putting aside how patients choose to weigh the individual harms and benefits of a daily aspirin, a 1:333 probability means that many patients taking aspirin will not benefit. 
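The arithmetic behind these figures is simple: the number needed to treat (NNT) is the reciprocal of the absolute risk reduction, and the number needed to harm (NNH) is the reciprocal of the absolute risk increase. The sketch below just inverts the NNT and NNH values quoted above to show what they imply for a group of 1000 patients; the per-patient risk figures are derived from those numbers, not taken from a separate source:

```python
def nnt(absolute_risk_change: float) -> float:
    """Number needed to treat (or harm) = 1 / absolute risk change."""
    return 1 / absolute_risk_change

# An NNT of 333 corresponds to an absolute risk reduction of about 0.3%:
arr_mi = 1 / 333       # ~0.003 fewer MIs per patient treated

# An NNH of 250 for major bleeding corresponds to an absolute risk
# increase of 0.4%:
ari_bleed = 1 / 250    # 0.004 additional major bleeds per patient treated

# Out of 1000 patients taking daily aspirin:
prevented_mi = 1000 * arr_mi     # about 3 heart attacks prevented
caused_bleeds = 1000 * ari_bleed # 4 major bleeds caused

print(round(prevented_mi), round(caused_bleeds))
```

Framed this way, the same evidence that sounds like "aspirin prevents heart attacks" also says that the overwhelming majority of patients taking it will see neither benefit nor harm, which is exactly the kind of uncertainty worth making explicit.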

Ultimately, the gap of uncertainty that is generated between what science produces and reality, whether it is in tennis or health care, should be addressed with greater transparency of the process. Being transparent about the specific limitations of medical science and technology allows the public to more accurately judge their value and understand their proper role in their care. There is nothing wrong with the concept of uncertainty, except when we flatly deny the existence of it or choose to hide it. There is limited reason to believe that sharing known areas of uncertainty with patients will make them less confident in their health care or the people that provide it.

At this current stage of AI in health care delivery, its use is limited to decision-support rather than being the decision-maker. Clinicians and patients can and will benefit from the use of this advancing technology to improve diagnosis, treatment, and prognostic determination. But the data it generates and the decisions it informs still belong to the clinicians and patients working together as part of shared decision-making. Therefore, health care professionals should inform patients about the use of scientific evidence and AI in their care, be open about both its benefits and limitations, and describe how they will use that information to provide better care. Health care decisions may be more complicated than tracking little fuzzy yellow balls, but improving public understanding of the scientific processes that inform them is a step in the right direction.

David J. Alfandre MD, MSPH, is a health care ethicist for the National Center for Ethics in Health Care (NCEHC) at the Department of Veterans Affairs (VA) and an Associate Professor in the Department of Medicine and the Department of Population Health at the NYU School of Medicine in New York. The views expressed in this article are those of the author and do not necessarily reflect the position or policy of the NCEHC or the VA.

References

  1. Collins H, Evans R.  You cannot be serious! Public understanding of technology with special reference to “Hawk-Eye.” Public Understand Sci. 2008;17:283-308.
  2. Mahmoud AN, Gad MM, Elgendy AY, Elgendy IY, Bavry AA. Efficacy and safety of aspirin for primary prevention of cardiovascular events: a meta-analysis and trial sequential analysis of randomized controlled trials. Eur Heart J. 2019;40:607-617.