Artificial intelligence in primary care


Position

Artificial intelligence (AI) has great potential in general practice, but as these technologies become more advanced, the risks that they pose must be carefully mitigated. The RACGP:

  • considers that GPs must be involved in the development and integration of AI-based solutions in primary care, to ensure solutions are fit-for-purpose
  • encourages efforts to ensure that the sector is appropriately regulated, including through post-market surveillance systems
  • is committed to supporting GPs to develop the skills needed to work with AI as required.

1. Definition

Artificial intelligence (AI) describes the machine simulation of human cognitive capabilities such as learning, reasoning or problem-solving, and self-correction.1 It encompasses a range of technologies, such as machine learning, deep learning, natural language processing, robotics, chatbots, image recognition and machine vision, and voice recognition.2, 3

Within the field of AI there is a differentiation between narrow (or ‘weak’) AI, which is able to fulfil a specific task; and general (‘strong’) AI, which is able to handle higher-level cognitive tasks such as decision making, planning, and contextual awareness.3 Narrow AI is currently being used across a range of applications in medicine and other domains, but general AI, which would mimic human intelligence, is at this stage a theoretical possibility only.

While the concept of AI has been around for many decades, its early incarnations involved ‘rule-centric’ approaches, in which programmers explicitly developed logic for a machine to follow. Over time, AI development has shifted to ‘data-centric’ approaches, in which a machine is able to learn using previously collected data.4 This is also called machine learning. Large language models (LLMs) are a type of AI system that uses massive volumes of ‘training’ data to generate, translate and summarise text.
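To make this distinction concrete, the sketch below contrasts the two approaches with a deliberately simplified, hypothetical example; the vital-sign thresholds, toy dataset and labels are fabricated for illustration and are not clinical guidance.

```python
# A minimal sketch contrasting 'rule-centric' and 'data-centric' (machine
# learning) approaches. All thresholds and data are fabricated.
from sklearn.linear_model import LogisticRegression

# Rule-centric: a programmer explicitly develops the logic the machine follows.
def rule_based_flag(temp_c: float, heart_rate: int) -> bool:
    return temp_c >= 38.0 and heart_rate >= 100  # fixed, hand-written rule

# Data-centric: the logic is learned from previously collected, labelled examples.
X = [[36.8, 72], [39.1, 110], [38.4, 95], [37.0, 80], [39.5, 120], [36.5, 65]]
y = [0, 1, 1, 0, 1, 0]  # hypothetical labels, e.g. from past clinician review

model = LogisticRegression().fit(X, y)

print(rule_based_flag(38.6, 104))    # True: the rule is transparent and fixed
print(model.predict([[38.6, 104]]))  # the learned decision depends on the data
```

The contrast also foreshadows a theme that recurs throughout this statement: a learned system inherits the strengths and weaknesses of the data it was trained on.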

2. The promise of AI

There is a range of potential benefits to the use of AI in primary care, and its uses have expanded significantly since the 2023 release of GPT-4, the LLM underpinning ChatGPT, a chatbot that can generate human-like responses to natural language inputs. LLMs are demonstrably able to create credible research papers, generate automated summaries of patient histories, assess clinical skills, and increase patient health literacy.5 LLMs have also performed well in answering medical examination questions.6 The use cases also extend to documentation, medical record keeping, business analytics, billing, patient engagement and clinical decision support. AI uses include:

  • reducing the burden of, or increasing efficiency in, routine administrative and clinical tasks, allowing GPs to focus on providing patient care3
  • filling service gaps in places where healthcare professionals are in short supply, such as developing nations or remote areas2
  • increasing patients’ access to care between visits and in their preferred language1
  • reducing risks to patient safety associated with fatigue and other human frailties, such as cognitive biases7
  • improving diagnostic accuracy and efficiency, particularly in the case of rare diseases8
  • standardising care across healthcare settings1
  • personalising treatment.9

3. Problems and unwanted outcomes

3.1. Safety, regulation and liability issues

Risks to patient safety present the largest obstacle to the widespread use of AI in healthcare. An AI tool can provide unsafe advice because it has been poorly programmed or trained with inadequate or unrepresentative (biased) data, or because it is vulnerable to hacking.1 There is also a risk that an AI tool will be used in an inappropriate setting or context, which can skew the device’s decision-making ability over time.3 A current issue with large language models is ‘hallucination’: the answering of questions with information that is incorrect but convincing in nature.

Technological advancement can also outpace regulation. AI technologies are subject to varying levels of regulation in Australia and around the world, and some have questioned the lack of rigour in assessing the safety and efficacy of AI tools, in contrast to the rigour applied to new treatments and procedures.1 Where regulation lags behind technological advancement, there is a risk that commercial developers could rush a product to market without sufficient testing or evidence to safeguard the public.

There are specific challenges to consider in regulating AI products. By definition, machine learning involves an evolution in a computer’s ability to learn from experience, so a device that uses this technology might be compliant with standards one day but not the next.1 AI might also be capable of re-identifying anonymised data by matching records across different datasets.11 Another major issue is the ‘black box’ phenomenon: the complex decision making that takes place within a machine learning system is hard for a human to untangle and understand, and in some circumstances that process will be the proprietary product of the developer or technology company. This creates issues in assessing the reliability of the system and determining whether biases have compromised its decision making.
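As a simple illustration of the re-identification risk, the sketch below links ‘anonymised’ records to a public list through shared quasi-identifiers; every record, name and field here is fabricated for illustration.

```python
# A minimal sketch of linkage re-identification. All records are fabricated.
# The 'anonymised' rows retain quasi-identifiers that also appear in public data.
anonymised_clinical = [
    {"postcode": "3000", "birth_year": 1957, "sex": "F", "diagnosis": "condition A"},
    {"postcode": "2615", "birth_year": 1984, "sex": "M", "diagnosis": "condition B"},
]

public_register = [  # e.g. an electoral roll or a social media profile
    {"name": "Jane Citizen", "postcode": "3000", "birth_year": 1957, "sex": "F"},
]

QUASI_IDENTIFIERS = ("postcode", "birth_year", "sex")

for record in anonymised_clinical:
    for person in public_register:
        if all(record[k] == person[k] for k in QUASI_IDENTIFIERS):
            print(f"{person['name']} re-identified -> {record['diagnosis']}")
```

An AI system with access to many such datasets can perform this matching at scale, which is why removing names alone is not a reliable form of de-identification.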

Hand in hand with the issue of regulation is the issue of liability. The finding of fault might become complicated where a clinician ignores a recommendation made by AI, or follows an AI recommendation that results in a poor outcome.11 There are many ethical questions of this nature that require deep consideration. For example, should a GP have the right to override a machine’s diagnosis or decision? Should the reverse apply, and if so, to what degree?1

The RACGP recognises the important work being undertaken by the Therapeutic Goods Administration (TGA) in the regulatory space around AI and the category of ‘Software as a Medical Device’.

3.2. Exacerbating health inequality

There is concern that AI will propagate and magnify existing problems with health equity. Biases can be ‘baked in’ to AI products trained on unrepresentative datasets.1 One concern, for example, is the use of historical training data in which women and ethnic minorities are underrepresented or absent.
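Aggregate performance figures can mask exactly this problem, which is why disaggregated evaluation is often recommended. The sketch below, using fabricated predictions, labels and group assignments, shows how a subgroup audit surfaces a disparity that the overall accuracy hides.

```python
# A minimal sketch of a subgroup audit. Predictions, labels and group
# membership are fabricated; group B stands in for an underrepresented cohort.
predictions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
labels      = [1, 0, 1, 1, 0, 1, 0, 1, 0, 0]
group       = ["A"] * 7 + ["B"] * 3  # group B is underrepresented

def accuracy(indices):
    return sum(predictions[i] == labels[i] for i in indices) / len(indices)

print(f"overall: {accuracy(range(len(labels))):.2f}")  # 0.80 looks acceptable
for g in ("A", "B"):
    idx = [i for i, x in enumerate(group) if x == g]
    print(f"group {g}: {accuracy(idx):.2f}")  # A: 1.00, B: 0.33
```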

3.3. Workforce issues

While many have envisioned a future in which human tasks will be completely subsumed by AI, it is unlikely that the complex work of a doctor will ever be performed by machines operating independently. AI cannot truly synthesise the value judgements and empathy of a doctor, but we are already beginning to see ways in which it can augment human skills.7, 12 GPs will continue to provide the societal, clinical, and personal context for diagnostics and treatment decisions.13

However, there are legitimate concerns about how AI will affect the delivery of primary care as it continues to advance. If AI takes on a significant role in patient interaction and decision-making, this shift might impact what it means to have a career in medicine.1 Just as AI has the potential to reduce the burden of administration, it might also decrease job satisfaction by stripping back the social elements of general practice and the stimulation of problem-solving in diagnostics.

Importantly, there is a need to ensure that AI does not create unnecessary and low-value work for GPs and general practice staff. There is potential for developers to create products that are intended to create efficiencies and improve effectiveness, but end up simply generating additional work for clinicians through task substitution, such as reviewing and editing clinical notes that have been automatically prepared by a digital scribe.14

Impacts on macro-level service delivery might also be on the horizon. For example, demand for GP services could increase if patients self-refer for care at the direction of an AI tool.1 This has the potential to further burden an already under-resourced sector and contribute to GP burnout.14

3.4. Trust and acceptance

A major impediment to the implementation of AI in primary care is a lack of trust in the technology. In addition to the concerns listed above, the lack of collaboration between professionals from the fields of medicine and AI contributes to this lack of trust and is a barrier to uptake.15 Without clinicians involved in the design, implementation and regulation of AI technologies, there is a risk that the healthcare system will inherit products that not only fail to solve existing problems, but create new and unforeseen ones in their wake.

AI developers need to work in partnership with GPs and primary care researchers, and GPs must have a say in guiding and overseeing the integration of AI as end users of these technologies. Failure to engage in this process could have far-reaching consequences, resulting in systems or products that benefit neither clinicians nor patients.14, 15 Value to technology company shareholders might be prioritised over patient outcomes.15

There is a need to build research teams that consist of AI specialists, primary care researchers and frontline clinicians, and to foster collegiality between these disciplines.4 Researchers should look to conduct evaluation studies within primary care settings, and GPs should play an active role in formulating research questions so that the studies’ conclusions have real-world applications.16

4. Policy response

4.1. Governance

Lawmakers, regulators, and professional bodies must keep pace with the development of AI to ensure it is implemented safely, particularly in the medical setting where the stakes are high.

The RACGP is keen to work with all stakeholders on the puzzle of how best to regulate AI so that these technologies can benefit Australian patients.

We recognise the need to strike a balance between regulation and innovation to advance the interests of patients. Slow progress in this arena, or over-regulation, could prevent the development of potentially life-saving technologies.10

There must be safeguards to protect patients after a product is released. Post-market surveillance will be critical, as systems that continuously learn from previously collected data must be evaluated over time, in a dynamic way, to ensure they remain fit for purpose.
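A minimal sketch of what such dynamic evaluation could look like is below; the baseline, tolerance and monthly accuracy figures are all assumed for illustration, and a real surveillance scheme would be considerably more sophisticated.

```python
# A minimal sketch of post-market performance monitoring. The baseline,
# tolerance and monthly accuracy figures are all fabricated.
BASELINE_ACCURACY = 0.90  # performance established at market approval (assumed)
TOLERANCE = 0.05          # permitted drift before a review is triggered

monthly_accuracy = {
    "2024-01": 0.91, "2024-02": 0.89, "2024-03": 0.88,
    "2024-04": 0.83, "2024-05": 0.82,
}

for month, accuracy in monthly_accuracy.items():
    if accuracy < BASELINE_ACCURACY - TOLERANCE:
        print(f"{month}: accuracy {accuracy:.2f} has drifted below threshold; "
              "flag the device for re-evaluation")
```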

There may be a need to establish a new Australian regulatory body to fill gaps in existing government portfolios and legislative instruments.17

4.2. Education

GPs at all stages of the career cycle will need new skills to work with AI and keep pace with rapid developments in this field.18 Some argue that a comprehensive overhaul of medical education will be required, as the changes brought by AI will have far-reaching effects on the practice of medicine.19 What is certain is that medical students and postgraduates of the future will require an understanding of medical informatics, mathematical concepts, data science, and the ethics of AI.7 They will need tools for appraising new AI technologies for safety and efficacy.20 At the postgraduate level, there should also be a focus on relevant applications of AI in clinical practice.21

To get the most out of AI technologies for their patients, doctors will also need to maintain and build on the skills that separate them from machines – the ability to demonstrate empathy, to see the relationship between a patient and their illness, and to consider the emotional states of the patient.20, 21 Skills in communication, decision making, leadership, and team-based work will continue to be of pivotal importance to future GPs.

The RACGP is committed to supporting GPs to develop the skills needed to work with AI as required.

References

  1. Academy of Medical Royal Colleges. Artificial intelligence in healthcare. London: AoMRC; 2019.
  2. Lin SY, Mahoney MR, Sinsky CA. Ten ways artificial intelligence will transform primary care. J Gen Intern Med. 2019;34(8):1626-30.
  3. Royal College of General Practitioners. Artificial Intelligence and primary care. London: RCGP; 2018.
  4. Kueper JK, Terry AL, Zwarenstein M, Lizotte DJ. Artificial intelligence and primary care research: a scoping review. Ann Fam Med. 2020;18(3):250-8.
  5. Dave T, Athaluri SA, Singh S. ChatGPT in medicine: an overview of its applications, advantages, limitations, future prospects, and ethical considerations. Front Artif Intell. 2023;6:1169595.
  6. Singhal K, Azizi S, Tu T, et al. Large language models encode clinical knowledge. Nature. 2023;620(7972):172-80.
  7. Summerton N, Cansdale M. Artificial intelligence and diagnosis in general practice. Br J Gen Pract. 2019;69(684):324-5.
  8. Buch VH, Ahmed I, Maruthappu M. Artificial intelligence in medicine: current trends and future possibilities. Br J Gen Pract. 2018;68(668):143-4.
  9. Topol E. The Topol review: preparing the healthcare workforce to deliver the digital future. London, UK: NHS; 2019.
  10. Dawson D, Schleiger E, Horton J, et al. Artificial Intelligence: Australia's ethics framework. Sydney, NSW: Data61 CSIRO; 2019.
  11. Pearce C, McLeod A, Rinehart N, et al. Artificial Intelligence and the clinical world: a view from the front line. Med J Aust. 2019;210(6):S38-40.
  12. Powell J. Trust me, I’m a chatbot: how artificial intelligence in health care fails the Turing test. J Med Internet Res. 2019;21(10):e16222.
  13. Verghese A, Shah NH, Harrington RA. What this computer needs is a physician: humanism and artificial intelligence. JAMA. 2018;319(1):19-20.
  14. Coiera E. The price of artificial intelligence. Yearb Med Inform. 2019;28(1):14.
  15. Liaw W, Kakadiaris IA. Primary care artificial intelligence: a branch hiding in plain sight. Ann Fam Med. 2020;18(3):194.
  16. Liaw W, Kakadiaris I. Artificial intelligence and family medicine: better together. Fam Med. 2020;52(1):8-10.
  17. Australian Human Rights Commission and World Economic Forum. Artificial Intelligence: governance and leadership - White paper. Sydney, NSW: AHRC; 2019.
  18. McCoy LG, Nagaraj S, Morgado F, et al. What do medical students actually need to know about artificial intelligence? NPJ Digit Med. 2020;3(1):1-3.
  19. Wartman SA, Combs CD. Reimagining medical education in the age of AI. AMA J Ethics. 2019;21(2):146-52.
  20. Rampton V, Mittelman M, Goldhahn J. Implications of artificial intelligence for medical education. Lancet Digit Health. 2020;2(3):e111-e2.
  21. Paranjape K, Schinkel M, Nannan Panday R, Car J, Nanayakkara P. Introducing artificial intelligence training in medical education. JMIR Med Educ. 2019;5(2):e16048.
