Post-surgical follow-up calls are common across many specialties to check how patients are recovering. They help catch problems early and guide patients on next steps. Procedures such as cataract removal, cardiac operations, and orthopedic repairs all benefit from timely follow-up to improve patient outcomes and satisfaction.
Today, nurses and administrative staff spend significant time calling patients. The work is labor-intensive, disrupts clinic schedules, and adds cost. Staff may also judge symptoms differently from one another, which can lead to inconsistent assessments. While complicated cases still need a physician's expertise, simple check-ins could be automated without losing quality.
Healthcare managers in the U.S. must balance quality of care, wise use of resources, and cost control, a task made harder by staffing shortages and tight budgets.
A recent UK study examined the use of an AI assistant called Dora R1 to make post-surgical follow-up calls. The study was funded by a national health organization, enrolled 225 patients at two public hospitals, and included 202 AI-led follow-up calls after cataract surgery.
Dora R1 called patients roughly three weeks after surgery and asked about specific symptoms. Based on the answers, it decided whether a patient needed further clinical review or could be discharged. Supervising ophthalmologists monitored the calls to keep them safe and to check the AI's decisions.
Most patients accepted AI calls for straightforward cases, but some worried about the lack of human contact in more complex situations. This suggests AI is best suited to routine follow-ups.
Medical leaders and IT teams in the U.S. want to improve clinic operations while staying within budget. The savings demonstrated by Dora R1 in the UK offer useful lessons for U.S. healthcare providers facing similar pressures.
In a typical clinic, nurses or administrative staff spend substantial time on follow-up calls, and the cost adds up quickly when many patients need them. The UK study reported a labor-cost saving of roughly £35.18 (about $43) per patient.
For a mid-sized surgical practice with a high call volume, automating these calls could cut payroll costs substantially, freeing funds for other areas of care or new patient services.
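As a rough illustration, the per-patient figure from the UK study can be scaled to a practice's own numbers. The sketch below is a back-of-the-envelope estimate only; the annual call volume and the share of calls suitable for automation are hypothetical placeholders, not figures from the study.

```python
# Back-of-the-envelope estimate of annual savings from automating follow-up calls.
# The per-patient saving (~$43) comes from the UK study; the call volume and
# automation share are hypothetical and should be replaced with a practice's own data.

SAVING_PER_PATIENT_USD = 43.0    # reported labor-cost saving per patient (approx. GBP 35.18)
ANNUAL_FOLLOW_UP_CALLS = 2_400   # hypothetical: follow-up calls the practice makes per year
AI_ELIGIBLE_SHARE = 0.80         # hypothetical: share of calls simple enough to automate

def estimated_annual_saving(per_patient: float, calls: int, eligible_share: float) -> float:
    """Scale the per-patient saving to the practice's expected call volume."""
    return per_patient * calls * eligible_share

if __name__ == "__main__":
    saving = estimated_annual_saving(SAVING_PER_PATIENT_USD, ANNUAL_FOLLOW_UP_CALLS, AI_ELIGIBLE_SHARE)
    print(f"Estimated annual labor-cost saving: ${saving:,.0f}")
    # With these placeholder numbers: 43 * 2,400 * 0.8 = $82,560 per year.
```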
When AI handles routine calls, clinicians and staff can focus on complex cases. This may improve the quality of patient assessment, shorten wait times, and reduce problems caused by delayed follow-up.
AI calls can be scheduled flexibly, including outside normal working hours, which is convenient for patients and avoids staff overtime.
In the study, Dora R1 did not miss any patients who later needed additional care, and the minor discrepancies that arose were reviewed and cleared by doctors. This supports the safety of supervised AI follow-up.
In the U.S., where malpractice liability is a constant concern, AI must operate under physician oversight. AI tools should support clinical judgment, not replace it.
AI follow-up systems can handle large patient volumes without added staff effort. Both small clinics and large hospital systems in the U.S. can integrate AI assistants with their electronic health records and telehealth platforms, and can tailor call questions and escalation rules to specific surgeries and local practices.
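A minimal sketch of what that local customization might look like is shown below. The structure, field names, questions, and thresholds are hypothetical illustrations, not part of Dora R1 or any EHR vendor's API.

```python
# Hypothetical configuration for tailoring follow-up calls by procedure.
# Everything here is illustrative; a real deployment would map such settings
# onto the clinic's EHR and telehealth integration.

FOLLOW_UP_PROTOCOLS = {
    "cataract_surgery": {
        "days_after_surgery": 21,  # call roughly three weeks post-op, as in the UK study
        "questions": [
            "Has your vision become worse since the operation?",
            "Do you have significant eye pain?",
            "Is the eye increasingly red?",
        ],
        "escalate_if_any_positive": True,  # any concerning answer triggers clinician review
    },
    "knee_replacement": {
        "days_after_surgery": 14,
        "questions": [
            "Do you have increasing pain or swelling?",
            "Is there any fever or discharge from the incision?",
        ],
        "escalate_if_any_positive": True,
    },
}

def protocol_for(procedure: str) -> dict:
    """Look up the call protocol configured for a given procedure, if any."""
    return FOLLOW_UP_PROTOCOLS.get(procedure, {})
```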
AI follow-up calls are part of a broader trend toward automating routine tasks in healthcare. Such automation saves teams time on repetitive work and lets them focus on higher-value clinical tasks.
Dora R1 and similar AI programs handle several key jobs: asking patients about post-operative symptoms, deciding whether a patient needs further clinical review or can be safely discharged, and flagging uncertain cases for a supervising clinician.
The U.S. healthcare system faces staffing shortages and a heavy administrative burden. Adding AI thoughtfully can streamline operations, improve patient care, and lower costs.
Although the Dora R1 results come from the UK, U.S. providers should weigh several points before adopting AI follow-up calls.
U.S. healthcare is governed by strict rules on patient privacy and safety. AI systems must comply with laws such as HIPAA and be thoroughly validated before use.
AI follow-up tools should integrate smoothly with existing electronic health records, telehealth platforms, and patient portals so staff can use them without friction.
Even when AI calls work well for simple cases, patients, especially older adults and those with complex conditions, need education and reassurance about the reduced human contact.
Small clinics and large hospital systems have different cost structures. Each should review its labor costs, patient volumes, and administrative expenses to determine whether AI follow-up calls would save money.
Using AI-powered follow-up calls in post-surgical care is a practical way for U.S. clinics to save money, improve workflows, and keep patients safe. The Dora R1 study offers concrete evidence on accuracy, usability, and cost savings that U.S. health leaders can weigh when evaluating AI tools for post-operative care.
Adding these tools while keeping doctor oversight and good patient communication will help make AI a useful and lasting part of regular healthcare.
Dora R1 is designed to conduct autonomous telemedicine follow-up assessments for cataract surgery patients, identifying and prioritizing those who need further clinical input, thereby expanding clinical capacity and improving patient triage post-surgery.
The accuracy was assessed by comparing Dora R1’s decisions on clinical symptoms and need for further review against those of supervising ophthalmologists in a sample of 202 patients following cataract surgery.
Dora R1 demonstrated an overall sensitivity of 94% and specificity of 86%, showing strong alignment with clinical decisions made by ophthalmologists.
Dora R1 showed moderate to strong agreement with clinicians, with kappa coefficients ranging from 0.758 to 0.970 across assessed clinical parameters, indicating high reliability in clinical decision-making.
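For readers less familiar with these metrics, the short sketch below shows how sensitivity, specificity, and Cohen's kappa are computed from a 2x2 agreement table of AI versus clinician decisions. The counts are illustrative placeholders chosen to roughly match the reported totals and percentages; they are not the study's actual data, and the resulting kappa reflects only this single made-up table, not the study's per-parameter range.

```python
# Sensitivity, specificity, and Cohen's kappa from a 2x2 agreement table
# (AI decision vs. ophthalmologist decision). Counts below are illustrative only.

tp = 47    # AI flagged for review, clinician agreed review was needed
fn = 3     # AI would discharge, clinician would have reviewed (missed cases)
fp = 21    # AI flagged for review, clinician would have discharged
tn = 131   # AI discharged, clinician agreed with discharge

sensitivity = tp / (tp + fn)   # share of true "needs review" cases caught
specificity = tn / (tn + fp)   # share of true "can discharge" cases identified

n = tp + fn + fp + tn
observed_agreement = (tp + tn) / n
# Chance agreement expected from each rater's marginal totals.
expected_agreement = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / (n * n)
kappa = (observed_agreement - expected_agreement) / (1 - expected_agreement)

print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, kappa={kappa:.2f}")
```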
Safety was affirmed: no patients discharged by Dora R1 were found to require additional follow-up on callback, and unexpected management changes were minimal and coincided with clinician recommendations, indicating safe clinical use.
Feasibility was shown with 96.5% of calls completed autonomously by Dora R1, while usability and acceptability were generally positive, although some patients expressed concerns about the absence of human interaction in complex cases.
Patients generally accepted routine AI follow-ups but worried about the absence of a human component in managing complications, indicating sensitivity to the emotional and clinical nuances of AI communication.
Dora R1 reduced staff costs by approximately £35.18 per patient, highlighting important economic advantages in resource allocation for routine post-surgical follow-ups.
The study recommends further real-world implementation studies involving larger and more diverse patient populations across multiple Trusts to validate safety, effectiveness, and generalizability.
Dora R1 evaluated the clinical significance of five key symptoms commonly monitored post-cataract surgery to decide if patients required further clinical review or could be safely discharged.
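As a rough illustration of this kind of symptom-driven triage, the sketch below flags a patient for clinician review if any monitored symptom is reported as concerning and otherwise marks them as suitable for discharge. This is a simplified assumption-based sketch, not Dora R1's actual decision logic, and the symptom names are hypothetical placeholders rather than the five parameters assessed in the study.

```python
# Illustrative symptom-driven triage rule: escalate to a clinician if any
# monitored post-operative symptom is reported as concerning. Not Dora R1's
# real logic; symptom names are placeholders for illustration only.

from dataclasses import dataclass

@dataclass
class SymptomReport:
    worsening_vision: bool
    significant_pain: bool
    increasing_redness: bool
    new_flashes_or_floaters: bool
    discharge_from_eye: bool

def triage(report: SymptomReport) -> str:
    """Return 'clinician_review' if any symptom is concerning, else 'discharge'."""
    concerning = any(vars(report).values())
    return "clinician_review" if concerning else "discharge"

# Example: a patient reporting new flashes or floaters is routed to a clinician.
patient = SymptomReport(False, False, False, True, False)
print(triage(patient))  # -> clinician_review
```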