Healthcare institutions are expanding beyond their traditional research roles: they now also build and deploy AI and machine-learning software for clinical and administrative work. This shift brings significant challenges, from protecting patient safety and meeting legal and ethical obligations to managing the inherent complexity of AI systems.
For healthcare administrators and IT managers, AI systems are no longer experiments: they are embedded in patient care and office operations. Brenna Loufek and Mark Lifson of the Mayo Clinic's Center for Digital Health stress that this transition demands deliberate collaboration among many groups, including Human Research Protection Programs (HRPPs), Institutional Review Boards (IRBs), developers, clinicians, and administrative staff.
In the United States, regulators increasingly treat AI software as a distinct category called Software as a Medical Device (SaMD). The Food and Drug Administration (FDA) is developing rules to ensure these tools are safe, effective, and ethical. But AI differs from conventional software: it can learn and change over time, which creates oversight challenges that traditional review processes were not designed for.
AI governance frameworks therefore play an important role. IBM research indicates that roughly 80% of business leaders see explainability, ethics, trust, and bias as major barriers to AI adoption. Making AI transparent and accountable means not only complying with current rules but also monitoring risks such as model drift, where a model's performance degrades after deployment and can lead to incorrect decisions in care.
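Model drift can be watched for with simple distribution checks. The sketch below computes a population stability index (PSI) between a model's validation-time scores and its live scores; the 0.2 alert threshold is a common rule of thumb rather than a regulatory requirement, and the function name is an illustrative assumption.

```python
import math

def population_stability_index(expected, actual, bins=10):
    """Compare two score distributions bin by bin.
    A PSI above ~0.2 is commonly treated as significant drift
    that warrants human review of the deployed model."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a degenerate range

    def frac(values, i):
        # Fraction of values falling in bin i; floor at 1e-6 to avoid log(0).
        count = sum(
            1 for v in values
            if lo + i * width <= v < lo + (i + 1) * width
            or (i == bins - 1 and v == hi)
        )
        return max(count / len(values), 1e-6)

    return sum(
        (frac(actual, i) - frac(expected, i))
        * math.log(frac(actual, i) / frac(expected, i))
        for i in range(bins)
    )
```

Run periodically against live scores, a rising PSI is an early warning that the population the model sees no longer matches the one it was validated on.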
Managing these issues requires oversight teams drawn from multiple disciplines, including AI developers, risk managers, clinicians, legal experts, and compliance officers. This collaboration supports continuous review of AI models, upholds ethical standards, and ensures compliance with privacy laws such as HIPAA.
Developing AI healthcare software is difficult. The first challenge is protecting human subjects. IRBs traditionally review research ethics, but because AI keeps learning after approval, IRB review alone is not enough. Healthcare organizations need dedicated oversight programs in which HRPPs work with technical teams to monitor AI for safety and fairness across its entire life cycle.
Data quality is another major issue. AI performance depends heavily on the data it receives; poor or biased data can produce incorrect predictions or treatment recommendations. The European Union emphasizes strong data systems and cybersecurity in healthcare, and U.S. providers should likewise keep data secure and accurate to avoid privacy breaches and ensure AI performs as intended.
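A minimal pre-ingestion data quality gate might look like the following sketch. The field names, ranges, and `patient_id` key are hypothetical; a real pipeline would add clinically informed plausibility rules.

```python
def validate_records(records, required_fields, ranges):
    """Return a list of issue strings for records that fail basic
    completeness, plausibility, and duplicate checks before model
    ingestion. Field names here are illustrative assumptions."""
    issues = []
    seen_ids = set()
    for i, rec in enumerate(records):
        # Completeness: every required field must be present and non-empty.
        for field in required_fields:
            if rec.get(field) in (None, ""):
                issues.append(f"record {i}: missing {field}")
        # Plausibility: numeric fields must fall inside configured ranges.
        for field, (lo, hi) in ranges.items():
            value = rec.get(field)
            if value is not None and not lo <= value <= hi:
                issues.append(f"record {i}: {field}={value} outside [{lo}, {hi}]")
        # Duplicates: the same patient should not appear twice.
        rid = rec.get("patient_id")
        if rid in seen_ids:
            issues.append(f"record {i}: duplicate patient_id {rid}")
        seen_ids.add(rid)
    return issues
```

Blocking ingestion until the issue list is empty keeps obviously bad data out of both training and inference.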
Compliance also requires managing AI by risk level. The EU AI Act, which entered into force on August 1, 2024, classifies AI systems as minimal risk, high risk, unacceptable risk, or subject to specific transparency obligations. Although the law applies in Europe, it signals the direction of regulation, and U.S. healthcare organizations should prepare medical software for similar risk assessments and increased government scrutiny.
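As a rough illustration of risk-tier triage, a screening function might map coarse system attributes to EU AI Act-style tiers. This is a deliberate simplification of the Act's actual criteria and not legal advice; the attribute names are assumptions for illustration.

```python
def triage_risk_tier(prohibited_practice, is_medical_device,
                     affects_clinical_decisions, interacts_with_patients):
    """Map coarse system attributes to an EU AI Act-style risk tier.
    Illustrative only: real classification requires legal review
    against the Act's full annexes and definitions."""
    if prohibited_practice:
        return "unacceptable"   # banned practices may not be deployed at all
    if is_medical_device or affects_clinical_decisions:
        return "high"           # regulated products and health decisions
    if interacts_with_patients:
        return "limited"        # transparency obligations (e.g. chatbots)
    return "minimal"
```

Even a crude triage like this is useful as a first filter: anything landing in "high" or "unacceptable" gets routed to legal and compliance review before development proceeds.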
A standardized framework helps healthcare organizations build and deploy AI that meets quality and safety requirements while leaving room for innovation. The framework should define steps for development, testing, launch, and post-deployment monitoring, ensuring AI complies with laws and ethics without slowing progress.
Key parts of such frameworks include:

- risk classification proportionate to the AI tool's potential impact on patients;
- ethical review involving the HRPP and IRB alongside technical teams;
- data quality and security standards aligned with HIPAA;
- defined stages for development, testing, launch, and post-deployment monitoring;
- continuous checks for bias, drift, and safety after go-live.

These elements help practice administrators and IT managers prepare for regulation while maintaining trust in the AI used in their work.
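The staged lifecycle described above can be sketched as a simple stage-gate tracker that refuses to skip steps. The stage names and the `LifecycleTracker` class are assumptions for illustration, not a published standard.

```python
# Hypothetical lifecycle gates: each stage must be signed off in order
# before an AI tool moves closer to live use.
STAGES = ["design", "validation", "deployment", "post_market_monitoring"]

class LifecycleTracker:
    def __init__(self):
        self.signed_off = []  # list of (stage, approver) tuples

    def sign_off(self, stage, approver):
        """Record approval for the next gate; out-of-order sign-offs fail."""
        expected = STAGES[len(self.signed_off)]
        if stage != expected:
            raise ValueError(f"cannot sign off {stage!r}; next gate is {expected!r}")
        self.signed_off.append((stage, approver))

    def current_stage(self):
        if len(self.signed_off) == len(STAGES):
            return "complete"
        return STAGES[len(self.signed_off)]
```

The point of the design is that no one can jump straight to deployment: the tracker raises an error unless validation has already been approved.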
AI can automate administrative work in healthcare offices. For practice administrators and owners, automating tasks such as answering phones and scheduling can reduce staff workload and lower error rates.
Companies like Simbo AI apply AI to front-office phone work, addressing common problems such as high call volumes, missed messages, and poor patient communication. Their AI answers calls 24/7 using natural language, so patients can book appointments, request refills, or get basic information without waiting for a person.
Integrating AI with practice management systems automates routine tasks so staff can focus on more complex patient needs. This use of AI matters for healthcare software regulation because it affects patient contact, satisfaction, and data security.
Administrators must ensure these AI tools comply with privacy laws and avoid bias, such as serving some patient groups better than others. They should also confirm that AI call records are protected and that human assistance remains available when needed, consistent with safety and ethical requirements.
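The kind of call handling described above, with a human fallback, can be sketched as simple intent matching. The intents, keywords, and function names below are illustrative assumptions, not Simbo AI's actual implementation, which would use far richer natural-language understanding.

```python
# Minimal sketch of front-office call triage: route recognized intents
# to automated flows and escalate everything else to a human.
INTENT_KEYWORDS = {
    "schedule_appointment": ("appointment", "schedule", "book"),
    "refill_request": ("refill", "prescription"),
    "office_info": ("hours", "address", "location"),
}

def route_call(transcript):
    """Return the first matching intent, or escalate to a person.
    Keyword matching stands in for real NLU in this sketch."""
    text = transcript.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(word in text for word in keywords):
            return intent
    return "escalate_to_human"  # human help stays available by default
```

The safety property worth noting is the default: anything the system does not confidently recognize goes to a person rather than being guessed at.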
AI clinical software usually begins in research but is now moving into real clinical and administrative use. Brenna Loufek and Mark Lifson of the Mayo Clinic argue that success requires oversight well beyond the IRB alone to keep safety and ethics in focus.
Healthcare leaders should support collaboration that includes many groups:

- HRPPs and IRBs, which provide ethical and human-subject oversight;
- clinicians, who understand how AI affects care at the point of use;
- developers and IT teams, who build, integrate, and monitor the software;
- legal and compliance staff, who track regulatory obligations.

This collaboration helps healthcare organizations manage regulatory review and safely deploy AI tools they can trust.
Data quality strongly affects how AI software performs: accurate, representative, and secure data leads to better AI decisions. The European Union's AI plan allocates €145.5 million to improve cybersecurity in healthcare, recognizing that security underpins trustworthy AI systems.
In the U.S., medical practice leaders should prioritize strong data governance aligned with HIPAA, NIST cybersecurity guidance, and industry best practices. AI tools should include automated checks that detect bias and raise alerts when results degrade.
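An automated bias check of the kind mentioned above might compare positive-outcome rates across patient groups. The demographic-parity metric and the 0.1 alert threshold below are illustrative choices, not a clinical or regulatory standard.

```python
def demographic_parity_gap(outcomes_by_group):
    """Absolute gap between the highest and lowest positive-outcome
    rates across patient groups (outcomes are 0/1 per patient)."""
    rates = {g: sum(o) / len(o) for g, o in outcomes_by_group.items()}
    return max(rates.values()) - min(rates.values())

def bias_alert(outcomes_by_group, threshold=0.1):
    """Flag the model for human review when the gap exceeds the
    threshold; the 0.1 default is an illustrative assumption."""
    return demographic_parity_gap(outcomes_by_group) > threshold
```

Run on a schedule against live predictions, an alert here triggers the kind of multidisciplinary review described earlier rather than any automatic change to the model.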
Securing AI against cyberattacks is critical: stolen or improperly accessed patient data harms both privacy and AI performance. Healthcare providers must build cybersecurity requirements into how they purchase and manage AI tools.
The U.S. does not yet have a comprehensive federal AI law like the EU AI Act, but regulation is evolving quickly. Healthcare organizations can prepare by:

- classifying AI tools by risk level, using the EU AI Act as a reference model;
- establishing governance bodies that combine clinical, technical, legal, and compliance expertise;
- strengthening data governance and cybersecurity under HIPAA and NIST guidance;
- monitoring deployed AI continuously for drift, bias, and safety issues.

Taking these steps builds resilience against new regulation and strengthens the organization's reputation for responsible AI use.
AI governance frameworks help organizations manage risks around fairness, privacy, transparency, and bias. IBM, for example, created an AI Ethics Board in 2019 to review AI tools and confirm they meet ethical and corporate standards.
Healthcare administrators and owners should consider similar safeguards, such as regular audits of AI systems to detect bias, reports on AI-related incidents, and clear rules on who is accountable for AI outcomes.
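One way to make accountability concrete is a structured incident record that always carries a named owner. The fields below are hypothetical, not a regulatory schema; the point is that "who is responsible" is a required field, not an afterthought.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical incident record: every AI-related problem gets a named
# accountable owner and a timestamped trail for later audits.
@dataclass
class AIIncident:
    system_name: str                 # which AI tool was involved
    description: str                 # what went wrong
    responsible_owner: str           # a named person, not a team
    severity: str = "low"            # low / medium / high (illustrative scale)
    reported_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    resolved: bool = False
```

Because `responsible_owner` has no default, an incident simply cannot be filed without naming who is accountable for resolving it.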
Following these principles supports ethical AI use and reduces legal and reputational risk. Being transparent with patients about how AI is used in their care can also increase trust and acceptance.
For U.S. healthcare providers, building and deploying AI healthcare software means balancing innovation with compliance. A standardized framework grounded in risk classification, ethics, data quality, collaboration, and continuous monitoring is essential.
Applying AI to front-office work, such as phone automation from companies like Simbo AI, offers ways to run the office more efficiently while keeping patients satisfied. Administrators must still ensure these tools meet privacy laws and ethical standards.
Collaboration among HRPPs, IRBs, clinicians, technical experts, and legal teams helps ensure AI software is safe and ethical. Preparing for new regulation by adopting governance practices modeled on international standards makes organizations more resilient.
Attention to these points helps healthcare organizations navigate the challenges of AI healthcare software, leading to better patient care, operational efficiency, and trust in AI tools. This careful approach prepares U.S. healthcare providers for current and future needs, ensuring AI remains a safe and useful tool for clinicians and administrators alike.
Key points from the session:

- Research healthcare organizations are transitioning from purely research roles to becoming manufacturers of AI/ML products, making ethical and effective deployment essential.
- Human subject protection is paramount, so that safety, effectiveness, and ethics stay prioritized during AI/ML research and its subsequent application.
- IRB efforts alone are insufficient for the complex challenges posed by AI/ML; a broader, collaborative approach is necessary.
- A holistic approach drawing on the expertise of the entire HRPP is needed to accelerate the translation of healthcare software into clinical practice, ensuring alignment with regulatory expectations for quality, safety, ethics, and innovation from research to deployment.
- HRPPs and IRBs should identify key individuals across institutions to contribute to the development and implementation of oversight programs.
- The session aims to explain the challenges of developing AI software, outline a regulatory framework, and identify key stakeholders in oversight; findings and outcomes will be disseminated post-conference for broader learning.
- Institutions can draw on their current resources, community involvement, and broader HRPP expertise to improve AI implementations.
- The IRB plays a crucial initial role in ensuring ethical standards are met during the research phase of AI/ML development.