Cataract surgery is one of the most common operations in the U.S., with millions performed every year. Afterward, clinicians need to check for complications such as infection, swelling, or vision problems, so most clinics call patients or schedule a visit a few weeks after surgery to screen for these issues.
A study from the United Kingdom enrolled 225 patients from two public hospitals and included 202 in the final analysis. About three weeks after surgery, patients received a phone call from an AI assistant called Dora R1. It asked about symptoms, assessed the patient's condition, and recommended whether further care was needed.
Eye doctors monitored the AI calls live to keep patients safe, and the study compared Dora R1's decisions with the doctors' own. This measured how well the AI identifies patients who need further help after cataract surgery.
Sensitivity and specificity are two key measures of how good a test is. Sensitivity is the share of patients who truly need further care that the test flags; specificity is the share of patients who are fine that the test correctly clears.
Dora R1 had a sensitivity of 94%, meaning it correctly flagged almost all patients who needed further care. High sensitivity matters most here, because a missed complication can be serious.
The AI's specificity was 86%: it correctly cleared most patients who were stable and did not need additional visits, which reduces unnecessary appointments and staff workload.
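Both metrics fall out of a simple 2x2 confusion matrix. The sketch below is illustrative only: the study does not report the raw table, so the counts are hypothetical, chosen to reproduce the headline 94%/86% figures.

```python
# Illustrative sensitivity/specificity calculation from a 2x2 confusion
# matrix. The counts below are hypothetical, not taken from the study.
def sensitivity(tp, fn):
    """Fraction of patients needing care that the test flags."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of stable patients the test correctly clears."""
    return tn / (tn + fp)

# Hypothetical split: 50 patients truly needing review, 150 stable.
tp, fn = 47, 3    # flagged vs missed among those needing review
tn, fp = 129, 21  # cleared vs over-referred among stable patients

print(f"sensitivity = {sensitivity(tp, fn):.0%}")  # 94%
print(f"specificity = {specificity(tn, fp):.0%}")  # 86%
```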
The study also found strong agreement between Dora R1 and the eye doctors, with kappa values ranging from 0.758 to 0.970, meaning the AI's decisions closely matched those of specialists.
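Cohen's kappa corrects raw agreement for the agreement two raters would reach by chance. A minimal sketch for the binary discharge/refer decision follows; the cell counts are hypothetical, not the study's actual data.

```python
# Cohen's kappa for two raters (AI vs clinician) making a binary
# discharge/refer decision. All counts below are hypothetical.
def cohens_kappa(both_yes, both_no, ai_only, clin_only):
    n = both_yes + both_no + ai_only + clin_only
    observed = (both_yes + both_no) / n
    # Chance agreement from each rater's marginal "yes" rate.
    ai_yes = (both_yes + ai_only) / n
    clin_yes = (both_yes + clin_only) / n
    expected = ai_yes * clin_yes + (1 - ai_yes) * (1 - clin_yes)
    return (observed - expected) / (1 - expected)

# Hypothetical table: 113 joint discharges, 77 joint referrals,
# 4 AI-only discharges, 8 clinician-only discharges (n = 202).
print(round(cohens_kappa(113, 77, 4, 8), 3))
```

A kappa of 0 means agreement no better than chance, and 1 means perfect agreement, so the study's 0.758 to 0.970 range indicates the AI's calls tracked the specialists' closely.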
Safety is critical when using AI in healthcare. Of the 117 patients discharged by Dora R1, 11 (9%) later had their care plan changed, but all of them would also have been discharged by the doctors, which lowers the risk of harm.
Four patients were discharged by Dora R1 but not by the doctors; none of them needed extra care after a follow-up callback. This suggests that AI supervised by doctors can be used safely.
In addition, 195 of 202 calls (96.5%) were completed fully by the AI, showing the technology can handle follow-ups reliably while keeping patients safe.
How patients feel about AI is very important for its success. The study interviewed 20 patients after their AI follow-up calls. Many patients accepted the technology for routine check-ups because it was convenient and fast.
Some patients worried about the lack of human involvement, especially for complex symptoms or complications. They valued the emotional support and understanding that human doctors provide, which AI cannot yet match.
For U.S. clinics, this suggests a hybrid model: AI handles routine cases, and humans step in when a case is serious or unclear.
One big advantage of using AI is cost savings. The study reported staff cost savings of about £35.18 (roughly $45) per patient compared with standard care.
This happened because fewer phone calls by doctors or nurses were needed. Staff could then focus on harder tasks. In busy U.S. eye clinics, where there are staff shortages, AI like Dora R1 could cut costs without lowering care quality.
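The per-patient figure can be scaled to a clinic's annual case volume to estimate yearly impact. In the sketch below, the annual volume and the exchange rate are assumptions for illustration, not figures from the study.

```python
# Back-of-envelope annual savings from the study's per-patient figure.
saving_per_patient_gbp = 35.18  # reported by the study
annual_cases = 2000             # hypothetical clinic volume (assumption)
gbp_to_usd = 1.27               # assumed exchange rate

annual_gbp = saving_per_patient_gbp * annual_cases
annual_usd = annual_gbp * gbp_to_usd
print(f"~£{annual_gbp:,.0f} (~${annual_usd:,.0f}) saved per year")
```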
AI systems like Dora R1 do more than just patient triage. They can help medical offices run more smoothly and reduce paperwork.
In U.S. clinics, front office jobs like scheduling, screening patients, and follow-up calls take many staff hours. AI bots can make these calls, letting staff spend more time with patients who need real medical help.
By adding AI to telemedicine, clinics can automate routine follow-up calls, reduce unnecessary in-person appointments, and free staff time for patients who need hands-on care.
IT managers need to link AI call systems to electronic health records so all calls are saved and doctors can watch over the AI. It is also important to follow rules that keep patient information safe.
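One way to picture the integration point is that each AI call produces a structured record the EHR stores for clinician oversight. The sketch below is illustrative only: the field names are invented, and a real integration would map this onto the EHR's actual interface (such as HL7/FHIR messaging) with proper authentication and patient-privacy controls.

```python
# Hypothetical shape of a call record handed to an EHR for clinician
# review. Field names are invented for illustration, not a real schema.
import json
from datetime import datetime, timezone

def build_call_record(patient_id, symptoms, ai_decision):
    return {
        "patient_id": patient_id,
        "call_time": datetime.now(timezone.utc).isoformat(),
        "symptoms_reported": symptoms,
        "ai_decision": ai_decision,  # "discharge" or "refer"
        "needs_clinician_review": ai_decision != "discharge",
        "source": "Dora R1 follow-up call",
    }

record = build_call_record(
    "PT-0042", {"pain": "none", "redness": "mild"}, "discharge"
)
print(json.dumps(record, indent=2))
```

Keeping every call as a reviewable record is what lets supervising doctors audit the AI's decisions after the fact, which the study treats as a core safety measure.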
Although the study was conducted in the U.K., its results are relevant to the U.S.: both countries face large volumes of patients needing follow-up after surgeries such as cataract operations.
The AI call system proved reliable, safe, and cost-saving. U.S. eye clinics, and other surgical fields, could use AI to manage growing patient volumes with fewer staff.
Still, AI should be rolled out carefully, with steps such as live clinician supervision of calls, callback checks for discharged patients, and clear escalation paths to human staff for complex cases.
AI could also grow to help other surgeries and long-term disease care in the future.
| Parameter | AI System (Dora R1) Performance | Remarks |
|---|---|---|
| Sensitivity | 94% | High detection rate of patients needing care |
| Specificity | 86% | Accurate discharge of stable patients |
| Agreement with Clinicians | Kappa 0.758 to 0.970 | Strong clinical decision alignment |
| Safety | 9% unexpected changes; no adverse outcomes | Comparable safety to human clinicians |
| Call Completion Rate | 96.5% | High feasibility for autonomous calls |
| Patient Acceptance | Generally positive; concern over lack of human interaction in complex cases | Important consideration for workflows |
| Cost Savings per Patient | £35.18 (~$45 USD) savings | Economic benefits in staff resource allocation |
Clinic managers and IT staff in the U.S. can consider AI tools like Dora R1 for post-surgical follow-ups, since these systems help allocate resources more effectively. IT staff must ensure the AI is safely integrated into the telemedicine setup and complies with patient privacy rules.
Teams should plan how to handle cases needing a human doctor. AI follow-ups will not replace healthcare workers but can help them by doing first checks and calls. This could improve patient satisfaction with faster contact and let doctors focus on those who need extra care.
In short, this study shows that AI like Dora R1 can safely and accurately find patients who need more care after cataract surgery, while saving money and matching doctors' decisions closely. For U.S. clinics, this kind of AI may help practices work more efficiently while keeping patients safe and well cared for.
Dora R1 is designed to conduct autonomous telemedicine follow-up assessments for cataract surgery patients, identifying and prioritizing those who need further clinical input, thereby expanding clinical capacity and improving patient triage post-surgery.
The accuracy was assessed by comparing Dora R1’s decisions on clinical symptoms and need for further review against those of supervising ophthalmologists in a sample of 202 patients following cataract surgery.
Dora R1 demonstrated an overall sensitivity of 94% and specificity of 86%, showing strong alignment with clinical decisions made by ophthalmologists.
Dora R1 showed moderate to strong agreement with clinicians, with kappa coefficients ranging from 0.758 to 0.970 across assessed clinical parameters, indicating high reliability in clinical decision-making.
Safety was affirmed as no patients incorrectly discharged by Dora R1 required additional follow-up after a callback. Unexpected management changes were minimal and coincided with clinician recommendations, indicating safe clinical use.
Feasibility was shown with 96.5% of calls completed autonomously by Dora R1, while usability and acceptability were generally positive, although some patients expressed concerns about the absence of human interaction in complex cases.
Patients generally accepted routine AI follow-ups but worried about the absence of a human component in managing complications, indicating sensitivity to the emotional and clinical nuances of AI communication.
Dora R1 reduced staff costs by approximately £35.18 per patient, highlighting important economic advantages in resource allocation for routine post-surgical follow-ups.
The study recommends further real-world implementation studies involving larger and more diverse patient populations across multiple Trusts to validate safety, effectiveness, and generalizability.
Dora R1 evaluated the clinical significance of five key symptoms commonly monitored post-cataract surgery to decide if patients required further clinical review or could be safely discharged.
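The article does not spell out the decision rule, but a symptom-checklist triage can be sketched as follows. The five symptom names and the any-flag rule below are hypothetical stand-ins for illustration, not Dora R1's actual logic.

```python
# Illustrative triage over a fixed five-symptom checklist. Symptom names
# and the decision rule are hypothetical, not the real Dora R1 system.
SYMPTOMS = ("pain", "redness", "worsening_vision", "flashes", "eye_discharge")

def triage(responses):
    """Return 'refer' if any monitored symptom is flagged, else 'discharge'.

    responses: dict mapping each symptom name to True (present) or False.
    """
    if any(s not in responses for s in SYMPTOMS):
        return "refer"  # incomplete assessment -> err toward human review
    return "refer" if any(responses[s] for s in SYMPTOMS) else "discharge"

print(triage({s: False for s in SYMPTOMS}))                    # discharge
print(triage({**{s: False for s in SYMPTOMS}, "pain": True}))  # refer
```

Defaulting to "refer" whenever the assessment is incomplete mirrors the study's safety posture: the conservative path is a clinician review, never an unsupervised discharge.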