Healthcare providers across the United States, especially in areas like San Diego, have begun using AI technologies to aid clinical decisions, automate administrative tasks, and engage patients. For instance, UC San Diego Health employs a deep-learning system created by Dr. Gabriel Wardi that analyzes roughly 150 clinical variables in real time to predict sepsis risk. By enabling early detection of sepsis, a condition responsible for millions of deaths worldwide, the system helps save about 50 lives annually.
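The article does not describe the model's internals, but the general shape of a real-time risk score can be illustrated with a small sketch. Everything below (the input names, weights, and alert threshold) is a hypothetical placeholder and greatly simplified; the actual UC San Diego system is a deep-learning model trained on roughly 150 variables, so nothing here should be read as its implementation.

```python
# Hypothetical, greatly simplified sketch of a real-time sepsis risk score.
# The actual UC San Diego Health system is a deep-learning model that draws on
# roughly 150 clinical variables; the feature names, weights, and alert
# threshold below are illustrative placeholders only.
import math

ALERT_THRESHOLD = 0.7  # illustrative cutoff for notifying the care team

# A handful of example inputs standing in for the ~150 real variables.
ILLUSTRATIVE_WEIGHTS = {
    "heart_rate": 0.004,        # beats per minute
    "respiratory_rate": 0.02,   # breaths per minute
    "temperature_c": 0.05,      # degrees Celsius
    "lactate_mmol_l": 0.4,      # serum lactate
    "wbc_k_per_ul": 0.01,       # white blood cell count
}
BIAS = -3.5  # illustrative intercept

def sepsis_risk(vitals: dict) -> float:
    """Return a 0-1 risk estimate from a weighted sum passed through a sigmoid."""
    score = BIAS + sum(w * vitals.get(name, 0.0)
                       for name, w in ILLUSTRATIVE_WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-score))

def maybe_alert(patient_id: str, vitals: dict) -> None:
    """Flag the patient for clinician review when the estimated risk is high."""
    risk = sepsis_risk(vitals)
    if risk >= ALERT_THRESHOLD:
        print(f"ALERT: patient {patient_id} sepsis risk {risk:.2f} - notify care team")

# Example: a fresh set of values streamed from the EHR triggers a re-score.
maybe_alert("demo-001", {
    "heart_rate": 118, "respiratory_rate": 26, "temperature_c": 39.1,
    "lactate_mmol_l": 3.8, "wbc_k_per_ul": 15.2,
})
```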
In addition to predictive tools, AI assists with transcribing and summarizing patient visits, supporting genetic data analysis, and helping researchers develop new drugs. Companies like Illumina use AI to study rare genetic diseases, while Scripps Health applies AI to reduce the time physicians spend on documentation to just seven to ten seconds per patient, allowing them to focus more on caring for patients.
Even with AI adoption growing quickly (about one-third of the $6 billion invested in digital health startups in 2024 went to AI-related companies), concerns remain regarding accuracy, patient privacy, and potential biases in decision algorithms.
AI systems process large amounts of data but are not flawless. They can reproduce biases present in their training data and make mistakes that go uncaught without close human supervision. Human involvement is necessary to ensure AI outputs are accurate, ethical, and contextually appropriate.
Each year in the U.S., about 12 million diagnostic errors occur, contributing to close to 800,000 deaths or serious permanent disabilities. AI shows potential to lower these errors, but clinicians must review AI-generated results carefully. Dr. Eric Topol of the Scripps Translational Science Institute describes human supervision as a safety check against AI mistakes in diagnosis or treatment recommendations.
Doctors and healthcare leaders should create procedures to verify AI suggestions and remain accountable for final decisions. This “human-in-the-loop” approach addresses AI’s limits, such as its inability to fully grasp patient context, complex histories, or the social factors that shape individualized care.
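A minimal sketch of what such a procedure could look like in software is shown below. The confidence threshold, risk categories, and queue names are assumptions for illustration, not a description of any provider's actual workflow; the point is simply that nothing is applied automatically and that riskier suggestions get closer scrutiny.

```python
# Minimal human-in-the-loop sketch: no AI suggestion is applied automatically,
# and low-confidence or high-risk suggestions are routed to a senior reviewer.
# The threshold, categories, and queue names are illustrative assumptions.
from dataclasses import dataclass

REVIEW_THRESHOLD = 0.90                                   # below this, escalate
HIGH_RISK = {"oncology", "sepsis", "medication_dosing"}   # illustrative categories

@dataclass
class AISuggestion:
    patient_id: str
    recommendation: str
    category: str
    confidence: float

def route(suggestion: AISuggestion) -> str:
    """Pick the review queue; a clinician stays accountable for every decision."""
    if suggestion.category in HIGH_RISK or suggestion.confidence < REVIEW_THRESHOLD:
        return "senior_clinician_review"
    return "standard_clinician_signoff"

# Example: even a fairly confident dosing suggestion is escalated because the
# category is high risk.
s = AISuggestion("demo-002", "increase vancomycin dose", "medication_dosing", 0.93)
print(route(s))  # -> senior_clinician_review
```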
Ethical oversight requires human participation. Patients should be informed when AI is part of their care and give consent. Institutions like Scripps Health have adopted these practices to increase transparency and build trust in AI use.
Dr. Jeeyun (Sophia) Baik points out gaps in protecting patient data used by AI, emphasizing the need to ensure privacy and prevent biased outcomes. Human reviewers help set ethical standards, check AI outputs for bias, and intervene when AI suggestions conflict with social values or fairness.
AI algorithms may not adjust well to new clinical situations without human input. Experts bring the flexibility and understanding that current AI lacks. This is critical given patient diversity and the need to tailor AI to different groups.
As an example, the sepsis AI tool at UC San Diego worked better at Hillcrest Hospital than at the La Jolla site because it was modified to fit the local patient population’s characteristics. Such adjustments require collaboration between clinicians and data scientists.
AI affects not only clinical decisions but also healthcare operations. It can automate workflows to reduce administrative burdens, streamline documentation, and improve billing without losing accuracy.
Companies like Simbo AI provide AI-based phone automation and answering services, which help medical practices manage patient communication. AI chatbots and virtual assistants can handle appointment setting, answer questions, and perform initial patient triage. For administrators, this technology offers better operational efficiency and patient access.
While AI reduces the workload, human staff remain necessary to handle complex issues, oversee AI performance, and manage exceptions.
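The intents and escalation rules below are made-up examples rather than a description of Simbo AI's or any vendor's actual product, but they illustrate the division of labor: routine requests are resolved automatically, while anything urgent or complex goes to people.

```python
# Sketch of front-office call routing: routine intents are handled by the
# automated assistant, and urgent or complex calls are escalated to staff.
# The intent labels and keyword rules are illustrative, not any vendor's API.

AUTOMATABLE_INTENTS = {"book_appointment", "office_hours", "directions", "refill_status"}

def route_call(intent: str, caller_said: str) -> str:
    """Return which channel handles the call."""
    urgent_phrases = ("chest pain", "bleeding", "can't breathe")
    if any(p in caller_said.lower() for p in urgent_phrases):
        return "escalate_immediately_to_staff"   # never automate potential emergencies
    if intent in AUTOMATABLE_INTENTS:
        return "handled_by_virtual_assistant"
    return "transfer_to_front_desk"              # humans manage the exceptions

print(route_call("book_appointment", "I'd like to see Dr. Lee next Tuesday"))
print(route_call("billing_dispute", "My statement looks wrong"))
```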
Medical coding is also changing with AI. AI-powered natural language processing examines electronic health records (EHRs) and assigns diagnostic and procedural codes more quickly and accurately than manual coding alone. This speeds up billing and lowers claim rejections.
Human coders still play a key role in reviewing difficult cases, ensuring compliance, and auditing AI results. They need to develop new skills in data analysis and AI monitoring. Collaboration among clinical, administrative, and IT teams is important.
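A toy version of that workflow is sketched below: a simple pass over the note text proposes candidate ICD-10 codes, and low-confidence matches land in a human coder's review queue. The keyword-to-code map and confidence values are placeholders; production systems rely on trained NLP models over the full record and far richer code sets.

```python
# Toy sketch of AI-assisted medical coding with a human coder in the loop.
# Real systems use trained NLP models over the full EHR; this keyword map and
# the confidence values are placeholders for illustration.

KEYWORD_TO_CODE = {
    "type 2 diabetes": ("E11.9", 0.95),          # illustrative ICD-10 mappings
    "essential hypertension": ("I10", 0.96),
    "asthma": ("J45.909", 0.80),
}
AUTO_ACCEPT = 0.90  # below this confidence, the code goes to a human coder

def suggest_codes(note_text: str):
    """Return (auto-suggested codes, codes flagged for human coder review)."""
    accepted, for_review = [], []
    text = note_text.lower()
    for phrase, (code, confidence) in KEYWORD_TO_CODE.items():
        if phrase in text:
            (accepted if confidence >= AUTO_ACCEPT else for_review).append(code)
    return accepted, for_review

note = "Patient with essential hypertension and poorly controlled asthma."
auto, review_queue = suggest_codes(note)
print("suggested:", auto)                   # ['I10']
print("needs coder review:", review_queue)  # ['J45.909']
```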
Revenue cycle management also benefits: AI shortens claims processing time and detects possible fraud using pattern recognition, helping healthcare providers comply with regulations and improve cash flow.
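"Pattern recognition" in production tools usually means trained anomaly-detection models over many features. A deliberately simple stand-in, using made-up numbers, is flagging claims that sit far outside a provider's historical distribution so a human auditor can take a look.

```python
# Deliberately simple stand-in for claims anomaly screening: flag new claims
# far above a provider's historical mean and route them to a human auditor.
# Real revenue cycle tools use richer features and trained models; the data
# and the z-score cutoff here are made up.
from statistics import mean, stdev

def flag_outlier_claims(history, new_claims, z_cutoff=3.0):
    """Return new claim amounts more than z_cutoff standard deviations
    above the provider's historical mean."""
    mu, sigma = mean(history), stdev(history)
    return [amt for amt in new_claims if sigma > 0 and (amt - mu) / sigma > z_cutoff]

past_claims = [120.0, 135.0, 110.0, 128.0, 140.0, 125.0, 118.0, 132.0]
incoming = [130.0, 129.0, 890.0]
print(flag_outlier_claims(past_claims, incoming))  # [890.0] goes to an auditor
```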
AI in diagnostic imaging is an example of how workflows can improve. Research shows AI increases the speed and accuracy of interpreting X-rays, MRIs, and CT scans, allowing quicker clinical decisions. AI also supports predictive analytics to create personalized treatment plans based on patient data.
These advances let clinicians see more patients or focus on complex cases. Proper training ensures staff can interpret AI outputs and intervene when needed, preserving care quality.
Although AI provides many advantages, risks around patient privacy, ethical use, and legal compliance must be managed carefully.
Health data is sensitive and needs strong privacy protections. AI systems must follow laws such as HIPAA and newer regulations like California’s SB 1120, which sets safety and fairness standards for AI in healthcare insurance and services.
Healthcare administrators should work with IT teams to ensure data encryption, access controls, and monitoring systems prevent unauthorized access while allowing AI to use data lawfully.
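As a loose illustration of the kind of controls meant here (the roles, purposes, and logging format are assumptions, not a specific product or policy), access decisions can be tied to role and purpose and written to an audit trail that monitoring systems can review.

```python
# Minimal sketch of role- and purpose-based access control with audit logging.
# The roles, purposes, and logging format are illustrative assumptions.
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

ALLOWED = {
    ("clinician", "treatment"): True,
    ("ai_pipeline", "risk_scoring"): True,   # authorized, policy-reviewed use only
    ("analyst", "marketing"): False,
}

def access_record(role: str, purpose: str, record_id: str) -> bool:
    """Allow or deny access and write an audit entry either way."""
    allowed = ALLOWED.get((role, purpose), False)
    logging.info("access %s record=%s role=%s purpose=%s",
                 "GRANTED" if allowed else "DENIED", record_id, role, purpose)
    return allowed

access_record("ai_pipeline", "risk_scoring", "pt-0042")
access_record("analyst", "marketing", "pt-0042")
```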
Bias in AI can cause unequal health outcomes if training data is not diverse or reflects existing inequalities. Ongoing human review is needed to spot and reduce bias by adjusting models for the population served.
This requires cooperation between clinicians, IT staff, and AI vendors committed to fair healthcare delivery.
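One concrete form that review can take is a routine subgroup audit: compute the same performance metric for each patient group the model serves and investigate any sizable gap. The sketch below uses made-up labels and predictions and a single metric (true positive rate); real audits examine multiple metrics and clinically meaningful subgroups.

```python
# Sketch of a subgroup bias audit: compare the model's true positive rate
# (sensitivity) across patient groups and surface large gaps for human review.
# The group labels, records, and gap threshold are made-up examples.
from collections import defaultdict

def tpr_by_group(records):
    """records: (group, true_label, predicted_label) triples. Returns TPR per group."""
    hits, positives = defaultdict(int), defaultdict(int)
    for group, truth, pred in records:
        if truth == 1:
            positives[group] += 1
            hits[group] += int(pred == 1)
    return {g: hits[g] / positives[g] for g in positives}

audit_data = [
    ("group_a", 1, 1), ("group_a", 1, 1), ("group_a", 1, 0), ("group_a", 0, 0),
    ("group_b", 1, 1), ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 0, 0),
]
rates = tpr_by_group(audit_data)
print(rates)  # group_a detected at roughly 0.67, group_b at roughly 0.33
if max(rates.values()) - min(rates.values()) > 0.10:
    print("Sensitivity gap exceeds 10 points - escalate for model review")
```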
Current rules like HIPAA do not fully cover AI’s unique challenges, leaving gaps in governance. Experts suggest dynamic policies that combine fixed rules with continuous human oversight to adapt to new risks.
Healthcare organizations are encouraged to create AI governance groups including clinicians, administrators, data scientists, and legal experts. This approach aims to balance innovation with patient safety, ethics, and compliance.
Training healthcare workers on AI’s capabilities, risks, and ethical concerns is important so they can supervise AI properly.
Human involvement in AI use helps build trust among doctors and patients. Surveys show 83% of physicians believe AI will help healthcare providers, but about 70% have worries about AI in diagnostics. This reflects the need for transparency and human participation.
Human oversight reassures patients that AI assists rather than replaces medical expertise. It verifies decisions and puts them in context, which is vital when medical outcomes can be life-changing.
Healthcare organizations that include human oversight report better patient satisfaction and clinical results.
Experts expect AI to significantly change healthcare delivery in the U.S. over the next decade, much as antibiotics reshaped medicine in the 20th century.
For healthcare administrators, owners, and IT managers, the future means balancing AI advances with strong human supervision to make sure benefits are safe and fair.
In conclusion, AI presents many opportunities to improve healthcare outcomes and efficiency. Yet collaboration between AI and human professionals remains crucial. Through ongoing oversight, ethical management, and thoughtful integration, healthcare systems in the United States can use AI effectively while protecting patient safety and trust.
Health systems in San Diego, such as UC San Diego Health and Scripps Health, are early adopters of AI because of its potential to improve diagnoses, manage patient data, and enhance the overall healthcare experience while saving significant time for healthcare providers.
AI is used for predicting sepsis risk, transcribing appointments, summarizing patient notes, generating post-exam documentation, and identifying conditions from images, among other applications.
AI tools have helped reduce documentation time, allowing physicians to spend more time with patients, thereby rehumanizing the examination experience.
Concerns include data privacy issues, potential job displacement, the accuracy of AI predictions, and whether patients are aware when AI is used in their care.
AI models analyze approximately 150 variables in near real-time from patient data to generate predictions on who may develop sepsis, significantly improving early detection.
Investors are increasingly funding AI in healthcare, with a third of nearly $6 billion in digital health investments going to AI-driven companies, signaling confidence in the technology’s future.
Ethical concerns focus on whether patients fully understand AI’s role, the protection of their health data, and how AI decisions may affect treatment recommendations.
Addressing algorithmic bias involves using diverse data sets tailored to specific populations, which can help enhance the accuracy of AI applications and reduce disparities in care.
Human oversight is crucial in using AI; clinicians must review AI-generated content to ensure accuracy and appropriateness in patient care, preventing potential errors.
Experts project that AI will dramatically change healthcare delivery within the next decade, potentially improving diagnosis accuracy and reducing medical errors significantly.