Recent data shows a notable change in how generative AI technologies are being adopted throughout the healthcare sector. A McKinsey survey from Q1 2024 reports that more than 70% of healthcare organizations in the United States—covering payers, providers, and technology groups—are either working on or have already integrated generative AI into their operations. This shift indicates growing acceptance among healthcare leaders that generative AI is a practical tool for optimizing workflows and improving care delivery.
Most organizations are still in the proof-of-concept stage, weighing the potential benefits against risks, costs, and regulatory issues. Of note, 59% of those implementing generative AI said they have partnered with third-party vendors to create tailored solutions. Only 17% chose to buy off-the-shelf products. This preference points to the complexity of U.S. healthcare settings and the need for adaptable AI systems that fit existing infrastructures and comply with regulations.
Cross-functional collaboration means teams from different departments or areas of expertise—such as clinical staff, IT personnel, administrators, and external vendors—work together to design, develop, and implement generative AI tools. Rather than leaving technology adoption to a single team or a top-down leadership mandate, collaboration builds a fuller understanding of workflows, clinical issues, and patient needs.
By working with specialized AI vendors, healthcare organizations can develop solutions closely matched to their needs. This teamwork mixes healthcare knowledge with technical skills, making sure AI supports both administrative efficiency and clinical quality while meeting regulatory requirements.
According to the McKinsey survey, 24% of healthcare organizations try to build AI tools internally, often using their IT and clinical informatics teams. For these groups, collaboration happens within the organization with open communication across departments. This helps align AI development with strategy and daily operations. The result is AI applications that reflect the size of the institution, patient groups served, and regulatory context.
Risk management is often a major challenge when scaling generative AI in healthcare. The industry is tightly regulated, especially around patient privacy under HIPAA and clinical liability, so AI-based tools need careful oversight.
The survey found that 57% of organizations not pursuing generative AI cited risk as a key reason. Risks include inaccurate AI outputs, algorithmic bias, data privacy lapses, and regulatory noncompliance. Effective governance frameworks are therefore necessary, including risk assessments, regular auditing, and data security measures.
Cross-functional teams help handle these risks. Clinical experts share knowledge about safety and care standards. IT staff focus on security and system readiness. Compliance officers ensure laws are followed. External vendors bring expertise in AI ethics, algorithm transparency, and safeguards. Together, these teams create a governance approach that is hard to achieve working alone. It also allows organizations to address risks early in development rather than after deployment.
Many U.S. healthcare organizations prefer customized generative AI tools due to the diverse and fragmented nature of the healthcare system. One-size-fits-all solutions often fail. With many EHR systems, payer requirements, and patient care models, AI has to fit into existing workflows.
Collaborating with third-party AI vendors helps make this possible. Vendors can adjust AI algorithms, interfaces, and automation to fit clinical specialties, billing methods, or patient communication styles. For example, a midsize medical practice might work with an AI company to develop a phone automation system that handles appointment requests, insurance checks, and patient questions. This reduces staff workload and improves patient access.
Customized solutions are especially valuable in complex settings, where administrative efficiency cuts costs and improves patient satisfaction. About 60% of healthcare organizations using generative AI report, or expect, a positive return on investment, and tailored AI increases the likelihood of achieving those results.
Healthcare administration in the U.S. constantly deals with limited staff, rising patient numbers, and heavy documentation demands. Workflow automation through AI tools like robotic process automation (RPA), natural language processing (NLP), and generative AI-powered virtual assistants helps tackle these issues.
One example is front-office phone automation, where AI manages incoming calls, routes patient questions, sets appointments, and provides insurance information automatically. Services like these help reduce staff burnout while maintaining service quality.
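As a rough illustration of the routing step in such a system, the sketch below classifies a transcribed caller request into an intent and returns the matching workflow label. All names here (`route_call`, `INTENT_KEYWORDS`) are hypothetical, and a production system would use an NLP model rather than keyword matching; this only shows the shape of the logic.

```python
# Illustrative sketch of a front-office call-routing step: map a caller's
# transcribed request to an intent so it can be handed to the right workflow
# (scheduling, insurance lookup, or a general queue). Keyword matching stands
# in for what would normally be an NLP/intent-classification model.

INTENT_KEYWORDS = {
    "appointment": ["appointment", "schedule", "reschedule", "book"],
    "insurance": ["insurance", "coverage", "copay", "deductible"],
}

def route_call(transcript: str) -> str:
    """Return an intent label for a transcribed caller utterance."""
    text = transcript.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(word in text for word in keywords):
            return intent
    # Nothing matched: fall back to a human or general-question queue.
    return "general_question"

if __name__ == "__main__":
    print(route_call("I'd like to reschedule my appointment for Friday"))
    print(route_call("Does my insurance cover this visit?"))
```

Even in this toy form, the design point carries over: routine, well-defined requests (scheduling, insurance checks) are automated, while anything unrecognized falls through to staff, which is how such systems reduce workload without degrading service quality.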
Collaboration between administrators, IT staff, and AI vendors is key to integrating AI smoothly with existing workflows and EHR systems. This ensures the technology supports daily operations and meets organizational requirements.
Although early AI uses in healthcare have focused mainly on administrative tasks, interest is growing in applying AI to core clinical functions. Leaders expect generative AI to improve productivity, patient outcomes, and quality of care in the near future.
Important areas include clinician productivity, patient engagement, administrative efficiency, and overall quality of care. Moving AI into clinical use requires close collaboration between clinicians and AI developers, along with regulatory oversight; cross-functional governance frameworks are essential to ensure ethical use.
Many healthcare organizations choose to partner with third-party AI vendors. These vendors bring technical skills and medical practice experience from previous projects. This helps fill gaps in internal capabilities, especially as AI technology changes quickly.
Working with external partners also offers flexibility. Contracts and modular AI tools can be adjusted to meet new regulations or organizational priorities. This flexibility is important in a sector where patient privacy, safety, reimbursement, and technology rules often change.
Organizations developing AI solutions internally must invest heavily in staff training, infrastructure, and compliance processes. Such projects require ongoing teamwork and integration across departments.
U.S. healthcare organizations show strong interest and ongoing investment in generative AI. The technology is expected to become more embedded in healthcare operations. Most groups start with pilot programs and proof-of-concept projects while focusing on risk management and governance to protect patients and meet regulations.
In this setting, cross-functional collaborations and vendor partnerships play a key role. They help ensure AI tools are developed carefully and responsibly. This allows healthcare providers to improve clinical productivity, patient engagement, and administrative work without exposing patients or the organization to unnecessary risk.
For administrators, owners, and IT managers, learning how to build and manage these collaborations will be important for successfully using generative AI technologies and shaping healthcare’s future.
Over 70% of healthcare leaders report that their organizations are pursuing or have implemented generative AI capabilities, indicating a shift towards more active integration of this technology within the sector.
Most organizations are in the proof-of-concept stage, exploring the trade-offs among returns, risks, and strategic priorities before full implementation.
59% are partnering with third-party vendors, while 24% plan to build solutions in-house, suggesting a trend towards customized applications.
Risk concerns dominate, with 57% of respondents citing risks as a primary reason for delaying adoption.
Improvements in clinician productivity, patient engagement, administrative efficiency, and overall care quality are seen as key benefits.
While ROI is critical, most organizations have not yet evaluated it fully; approximately 60% of those who have implemented see or expect a positive ROI.
Major hurdles include risk management, technology readiness, insufficient infrastructure, and the challenge of proving value before further investment.
Cross-functional collaborations and vendor partnerships allow organizations to leverage external expertise and develop tailored solutions, enhancing their ability to integrate generative AI effectively within existing systems.
Risks such as inaccurate outputs and algorithmic bias are central concerns, necessitating strong governance frameworks and guardrails to ensure safety and regulatory compliance.
As organizations enhance their risk management and governance capabilities, a broader focus on core clinical applications is expected, ultimately improving patient experiences and care delivery.