AI triage tools use natural language processing and machine learning to analyze patient symptoms and clinical data, automatically sorting cases by urgency. These tools support clinical staff by streamlining routine assessments, making triage decisions more consistent, and flagging serious cases faster.
Several recent studies have looked at AI’s role in triage. They tested AI models like ChatGPT (versions 3.5 and 4o) and compared these models to experienced emergency doctors and nurses. These studies used known triage systems such as the Canadian Triage and Acuity Scale (CTAS) and the Manchester Triage System. These systems split patient conditions into different urgency levels.
Research shows that AI does well in some parts of triage but also has limits. It can help with decision-making but can’t replace human clinical judgment yet.
A study at the University of Zagreb, led by Dorian Fildor and Mirjana Pejić Bach, tested how well ChatGPT sorts ear, nose, and throat (ENT) patient cases by urgency, comparing the AI's sorting with that of an experienced hospital doctor. ChatGPT separated urgent and non-urgent cases moderately well, but it sometimes gave ambiguous answers, which means AI cannot yet fully replace clinical experts.
The study showed that AI can help reduce the workload in triage. This gives healthcare workers more time to focus on patients. But AI needs careful supervision because it might miss some complex clinical details.
A U.S. study by Ahmad A. Aalam, MD, tested ChatGPT-4o against four expert emergency doctors. They used 60 emergency situations and the Canadian Triage and Acuity Scale (CTAS). CTAS has five urgency levels:

- Level 1: Resuscitation
- Level 2: Emergent
- Level 3: Urgent
- Level 4: Less Urgent
- Level 5: Non-Urgent
ChatGPT-4o mostly agreed with the human doctors, with a Cohen's kappa of 0.695, indicating substantial agreement but also room for improvement. At the highest urgency (Level 1), the AI caught all emergency cases correctly (100% sensitivity) and had high accuracy in avoiding false alarms (97.67% specificity). For Level 5 (non-urgent), it also caught all true cases, with 100% sensitivity and 93.02% specificity.
But at Level 4 (less urgent), the AI was less accurate, with 50% sensitivity. This shows AI finds it harder to spot moderately urgent cases. This makes sense because these cases can be complex and need a doctor’s detailed understanding.
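The metrics above can be made concrete with a short sketch. The functions below compute Cohen's kappa (agreement beyond chance between two raters) and per-level sensitivity/specificity; the ten ratings are invented for illustration and do not come from any of the studies discussed here.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: probability both raters pick the same level at random
    expected = sum(counts_a[k] * counts_b.get(k, 0) for k in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

def sensitivity_specificity(truth, predicted, positive_level):
    """One-vs-rest sensitivity and specificity for a single triage level."""
    tp = sum(t == positive_level and p == positive_level for t, p in zip(truth, predicted))
    fn = sum(t == positive_level and p != positive_level for t, p in zip(truth, predicted))
    tn = sum(t != positive_level and p != positive_level for t, p in zip(truth, predicted))
    fp = sum(t != positive_level and p == positive_level for t, p in zip(truth, predicted))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical ratings on a 5-level scale (1 = most urgent)
doctor = [1, 1, 2, 3, 3, 4, 4, 5, 5, 5]
model  = [1, 1, 2, 3, 2, 4, 3, 5, 5, 5]
print(round(cohens_kappa(doctor, model), 3))          # prints 0.747
sens, spec = sensitivity_specificity(doctor, model, 1)
print(sens, spec)                                     # prints 1.0 1.0
```

With this toy data the model agrees with the doctor on 8 of 10 cases, and at Level 1 it catches every true emergency, mirroring the pattern the CTAS study reports at the extremes of the scale.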
Dr. Aalam said AI can’t replace doctors yet but is useful in spotting critical patients fast. It can help teams focus on urgent cases and use resources better.
Another study, from Vilnius University Hospital, compared ChatGPT 3.5 with six emergency doctors and 51 nurses using the Manchester Triage System across 110 cases. In the most urgent category (Level 1), the AI outperformed the nurses, with accuracy of 27.3% vs. 9.3% and specificity of 27.8% vs. 8.3%. This pattern suggests the AI tends to mark more cases as urgent, which may help avoid missing seriously ill patients but can use resources less efficiently.
Researchers, including Dr. Renata Jukneviciene, said AI should support clinical staff, not replace them. They warned that excessive over-triage by AI can waste resources and stressed that human review is needed to check AI results.
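Over-triage of the kind the Vilnius researchers warn about can be quantified as the share of cases the model rates more urgent than the clinician (on Manchester-style scales, a lower number means more urgent). The sketch below uses invented data, not figures from the study.

```python
def over_triage_rate(clinician, model):
    """Fraction of cases where the model assigns a MORE urgent level
    (i.e., a lower number) than the clinician did."""
    pairs = list(zip(clinician, model))
    return sum(m < c for c, m in pairs) / len(pairs)

# Hypothetical paired ratings on a 5-level scale (1 = most urgent)
clinician = [2, 3, 3, 4, 5, 5]
model     = [1, 3, 2, 4, 3, 5]
print(over_triage_rate(clinician, model))  # prints 0.5
```

Tracking this rate over time is one simple way a hospital could monitor whether an AI triage tool is drifting toward systematic over-triage.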
Healthcare leaders in the U.S. must think about both the good and bad when using AI triage tools. AI can help workflows and support decisions, but it needs strong supervision by clinicians.
U.S. healthcare managers should keep several points in mind when evaluating AI triage: new technology must be balanced with staff training, workflow changes, and ongoing clinician oversight.
Automating triage and improving front-office work are key efforts by companies like Simbo AI. Simbo AI creates AI tools for phone automation and answering services made for healthcare. Their systems handle patient calls by screening, gathering symptom details, and sending urgent calls to medical staff fast. This helps lower the load on office workers and speeds up response to patients.
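The call-handling flow described above can be sketched as a simple routing rule. This is an illustrative toy only: the keyword matching stands in for a real NLP model, and the queue names and red-flag list are hypothetical, not Simbo AI's actual product logic.

```python
# Toy front-office call router: screen a call transcript and pick a queue.
RED_FLAGS = {"chest pain", "can't breathe", "unconscious", "severe bleeding"}

def route_call(transcript: str) -> str:
    """Return a destination queue for an incoming patient call."""
    text = transcript.lower()
    if any(flag in text for flag in RED_FLAGS):
        return "urgent: page clinical staff"   # escalate to clinicians immediately
    if "refill" in text or "appointment" in text:
        return "front office"                  # routine administrative request
    return "nurse callback"                    # default: queue for human review

print(route_call("I have severe bleeding from a cut"))  # prints urgent: page clinical staff
```

Note the conservative default: anything the rule cannot classify goes to a human, which is the supervision pattern the studies above recommend.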
For U.S. healthcare centers with heavy call volumes and limited staff, AI automation offers benefits such as faster call handling, consistent symptom screening, and quicker escalation of urgent calls.
Healthcare administrators and IT staff should see how front-office AI tools can work with electronic health records (EHR) and other systems. Making sure these tools connect well and let clinicians supervise is important for safety and quality.
Even though AI shows promise, U.S. hospitals face challenges when adopting this technology, including accuracy limits, trustworthiness, ethical concerns, interpretability, and the risk of erroneous medical advice without sufficient validation.
Hospitals should create rules that mix AI help with ongoing doctor checks. Testing AI in some departments first and watching results closely can guide wider use.
The studies show AI triage tools like ChatGPT can help healthcare teams quickly check how urgent patients are. In emergencies, AI spots critical cases fast. Still, AI has trouble with tricky cases and needs supervision to avoid wrong urgency ratings.
For U.S. healthcare managers running busy practices or emergency rooms, adding AI phone automation together with clinical triage tools can cut workloads, improve patient flow, and use resources better. Simbo AI’s phone systems fit this need well by supporting patient communication and early screening before a clinical check.
Still, AI should be used carefully, with a clear understanding of its current capabilities and limits. Training staff and supervising AI use remain essential for safe and effective healthcare.
This review of AI and clinician triage accuracy offers helpful advice for U.S. healthcare leaders thinking about AI. Using proven AI tools while keeping skilled human staff in charge of care helps healthcare centers improve how they work without risking patient safety or quality.
The primary objective is to assess ChatGPT's ability to categorize patient conditions as urgent or non-urgent to aid in automating and digitalizing healthcare triage, thereby reducing healthcare professionals' workload.
Patient cases were presented to ChatGPT, which categorized urgency; these categorizations were then compared with those assigned by an experienced hospital doctor to evaluate ChatGPT's accuracy.
AI can streamline patient care by supporting triage decisions, ensuring timely treatment allocation, and allowing healthcare professionals to focus more on direct patient care, thereby improving efficiency and outcomes.
The results showed uncertainty in ChatGPT's ability to provide reliable medical advice, indicating it cannot yet fully replace expert clinical judgment in triage decisions.
Collaboration ensures that triage categorizations are clinically validated, enabling a reliable comparison between AI and expert assessments for accuracy evaluation.
AI's integration can optimize medical services, enhance patient experiences, and promote the digitalization of healthcare processes systematically and efficiently.
Initial assessment and categorization of patient urgency in ENT and other domains can improve workflow by automating routine triage procedures.
Challenges include AI accuracy, trustworthiness, ethical concerns, interpretability, and the risk of erroneous medical advice without sufficient validation.
AI can alleviate administrative burdens by automating triage, allowing staff to concentrate more on direct clinical care and complex decision-making.
Further exploration and improvement of AI accuracy and reliability in clinical contexts, along with ethical frameworks, are necessary to effectively integrate AI agents in healthcare triage systems.