Boards of directors have an important role in guiding AI projects in their organizations. Holly J. Gregory, a partner at Sidley Austin LLP, says boards must treat AI risks the way they treat other financial, regulatory, and ethical risks. The board's main tasks include understanding how AI is used now and might be used in the future, managing risks, and making sure the organization follows applicable laws.
In healthcare, where patient privacy and clear communication are critical, AI governance must focus on transparency, fairness, and safe handling of data. When AI tools like automated phone answering are introduced, boards need to assess how these tools affect patients, workers, and the community.
At a recent CEO summit at Yale University, 42% of business leaders surveyed said they worry about serious harm caused by AI. This suggests boards should devote time to assessing AI risks, especially in healthcare, where mistakes or bias can cause serious problems.
Boards should stay updated on AI projects and set up committees to review AI risks and compliance. The U.S. National Institute of Standards and Technology (NIST) created a guide called the AI Risk Management Framework (AI RMF 1.0). This guide helps organizations build trustworthy AI that is accountable, transparent, and safe. Healthcare organizations using AI can follow this guide to better manage AI risks.
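To make this concrete, the sketch below shows one way a governance team might keep a simple risk register organized around the framework's four core functions (Govern, Map, Measure, Manage). The specific systems, risks, and field names are hypothetical examples used only for illustration, not part of the framework itself.

```python
from dataclasses import dataclass

# Hypothetical risk-register entry organized around the four core
# functions of NIST AI RMF 1.0: Govern, Map, Measure, Manage.
@dataclass
class AIRiskEntry:
    system: str         # AI system under review (e.g., phone automation)
    risk: str           # plain-language description of the risk
    rmf_function: str   # "Govern", "Map", "Measure", or "Manage"
    owner: str          # accountable role that reports to the board
    mitigation: str     # current control or planned action
    status: str = "open"

# Example entries a healthcare organization might track and report upward.
register = [
    AIRiskEntry("Front-office phone AI", "Misdirected urgent calls",
                "Measure", "IT Director", "Weekly transcript audits"),
    AIRiskEntry("Scheduling assistant", "Bias against non-English callers",
                "Manage", "Compliance Officer", "Language coverage testing"),
]

# A board-level summary could simply roll these entries up by function.
for entry in register:
    print(f"[{entry.rmf_function}] {entry.system}: {entry.risk} -> {entry.mitigation}")
```

A register like this gives a board a single artifact to request each quarter, rather than relying on ad hoc updates from management.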
Operational risks are problems that might happen during daily healthcare work when AI is added. In clinics, AI mistakes might include errors in scheduling patients, misdirected phone calls, or bias in clinical tools.
One big issue is the data AI systems learn from. Boards must make sure the data reflects the mix of patients the organization serves. If AI is biased, it can cause unfair treatment or access problems. This is not only wrong but could also lead to legal trouble. Boards should ask AI vendors to be transparent about where their training data comes from and how good it is.
Cybersecurity is another risk. AI systems manage patient information that HIPAA rules protect. A security breach could cause big fines and harm the organization’s reputation. Boards must oversee strong security rules, employee training, and regular checks of AI systems.
Also, AI is increasingly used in hiring and staff management decisions. The Equal Employment Opportunity Commission (EEOC) says AI used in employment must comply with civil rights laws to avoid discrimination. Medical offices using AI for hiring must check for bias, for example under New York City's Local Law 144, with enforcement beginning in July 2023, which requires bias audits of automated hiring tools. Boards need to make sure employees are trained and policies are in place to meet these rules.
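To illustrate the kind of analysis such a bias audit involves, the sketch below computes selection rates and impact ratios across candidate groups for an automated screening step. All figures, group names, and the 0.8 review threshold are hypothetical examples chosen for illustration, not legal requirements.

```python
# Hypothetical bias-audit sketch: selection rates and impact ratios by group
# for an automated hiring tool. All numbers are illustrative only.

applicants = {"Group A": 200, "Group B": 150, "Group C": 90}
selected   = {"Group A": 60,  "Group B": 30,  "Group C": 12}

# Selection rate = share of applicants in each group who were selected.
selection_rates = {g: selected[g] / applicants[g] for g in applicants}
best_rate = max(selection_rates.values())

for group, rate in selection_rates.items():
    # Impact ratio compares each group's rate to the highest-performing group.
    impact_ratio = rate / best_rate
    flag = "REVIEW" if impact_ratio < 0.8 else "ok"  # 0.8 cutoff is illustrative
    print(f"{group}: selection rate {rate:.0%}, impact ratio {impact_ratio:.2f} [{flag}]")
```

Even a simple calculation like this makes it easier for a board to ask pointed questions about which groups a hiring tool may be screening out.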
Financial stewardship is a key board duty. When buying AI tools, boards should weigh costs against expected benefits. AI can make work easier and improve the patient experience. For example, AI phone answering can lower wait times and let staff focus on harder tasks. Yet AI projects can be costly at first and need ongoing money for upkeep, updates, and compliance.
Boards should ask management for clear measures of AI project value. These could include cutting costs, better call handling, higher patient satisfaction, and fewer mistakes. Linking money spent to real results helps boards hold leaders responsible.
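As one illustration of what such measures might look like in practice, the sketch below computes a few front-office metrics a board could request each quarter. Every input figure is a made-up example, not a benchmark, and the metric names are only one reasonable way to frame value.

```python
# Illustrative calculation of front-office AI metrics a board might request.
# All input figures are hypothetical examples, not benchmarks.

calls_received = 12_000        # monthly inbound call volume
calls_answered_by_ai = 9_600   # calls fully handled by the AI system
missed_calls_before = 1_400    # missed calls in the pre-AI baseline month
missed_calls_after = 350       # missed calls after deployment
monthly_ai_cost = 4_500        # licensing plus maintenance, in dollars

containment_rate = calls_answered_by_ai / calls_received
missed_call_reduction = 1 - missed_calls_after / missed_calls_before
cost_per_contained_call = monthly_ai_cost / calls_answered_by_ai

print(f"AI containment rate: {containment_rate:.0%}")
print(f"Missed-call reduction: {missed_call_reduction:.0%}")
print(f"Cost per AI-handled call: ${cost_per_contained_call:.2f}")
```

Reporting a handful of consistent figures like these month over month is what lets a board link spending to results, as described above.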
There are also money risks if AI use breaks rules or leads to data leaks. Fines and lawsuits can be costly. Boards must check insurance plans for AI risks and plan for protection if problems happen.
Hiring outside experts to review AI projects is a good idea. Boards might also work with AI specialists for ongoing guidance, especially in smaller healthcare practices with limited IT staff.
Reputation is very important for healthcare organizations. It affects patient trust, recruitment of healthcare workers, and referrals. More than in many other businesses, patients expect privacy, accuracy, and compassion. AI poses reputational risks from several directions.
First, if AI decisions are biased, patients and the public might lose trust. For example, if an AI phone system does not understand certain accents or languages, some patients may feel ignored. Being clear about how AI works can help ease worries.
Second, data breaches or privacy problems can quickly hurt reputation. Boards must make sure AI follows HIPAA and other data laws and uses strong cybersecurity.
Third, AI’s energy use is also a concern. Training big AI models uses a lot of power, which can affect the environment. Medical offices can think about this when choosing AI vendors or cloud versus on-site solutions.
Boards should include AI risks in their overall risk plans. Steve Cobb from SecurityScorecard suggests using AI to monitor public sentiment and spot problems early. Showing people that AI is handled carefully helps keep trust.
Having a plan to respond to AI problems is important too. This plan should say who does what and how to communicate quickly if AI problems affect the public.
A key focus is how AI automation can improve healthcare work without creating new risks. Front-office phone automation is one example. Simbo AI makes AI answering tools specifically for healthcare providers. These tools handle appointment calls, patient questions, and routine conversations using natural language AI. This reduces staff workload and helps answer patients faster.
Boards must know both benefits and limits of these systems. Benefits include better efficiency, fewer missed calls, and steady patient experiences. This can lead to happier patients and better operations.
Boards should also check that AI workflows are monitored and updated regularly. AI needs regular training with varied, accurate data to avoid mistakes. Boards must make sure IT teams review AI outputs regularly for bias or errors.
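One simple way such a review could work is a periodic spot check of AI-handled calls. The sketch below samples recent calls and groups them by caller language so reviewers can see whether the system performs evenly across groups; the call-log fields, sample size, and simulated data are all hypothetical.

```python
import random

# Hypothetical monthly review: sample AI-handled calls for human spot checks.
# In practice, call_log would come from the phone system's records;
# here it is simulated so the sketch runs on its own.
call_log = [
    {"id": i,
     "language": random.choice(["en", "es", "zh"]),
     "resolved_by_ai": random.random() > 0.2}
    for i in range(1, 1001)
]

sample = random.sample(call_log, 50)  # 50 calls pulled for human review

# Group the sample by caller language so uneven performance is visible.
by_language = {}
for call in sample:
    by_language.setdefault(call["language"], []).append(call)

for language, calls in by_language.items():
    unresolved = sum(1 for c in calls if not c["resolved_by_ai"])
    print(f"{language}: {len(calls)} sampled, {unresolved} needed human follow-up")
```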
AI automation should not replace human care, especially in sensitive healthcare conversations. Boards may want to require that difficult or emotional calls are escalated to human staff quickly.
Automation can also help meet healthcare rules. For example, retained call logs can show that patient interactions followed legal and ethical requirements.
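As an illustration, a retained call record might look something like the sketch below. The field names and structure are hypothetical, and actual retention and documentation requirements should come from compliance counsel.

```python
import json
from datetime import datetime, timezone

# Hypothetical structure for a retained AI call log used as compliance evidence.
# Field names are illustrative, not a prescribed schema.
call_record = {
    "call_id": "c-000123",
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "handled_by": "ai",                    # "ai", or "human" after escalation
    "purpose": "appointment_reschedule",
    "consent_script_played": True,         # caller notified of automation
    "escalated_to_staff": False,
    "transcript_reference": "vault://calls/c-000123",  # stored separately, access-controlled
}

print(json.dumps(call_record, indent=2))
```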
Boards must handle changing rules about AI. In the U.S., key rules include NIST’s AI RMF 1.0, which promotes responsible AI use with good governance. The Equal Employment Opportunity Commission works to stop AI discrimination in hiring.
Healthcare groups must also know the views of federal agencies. The CFPB, DOJ, FTC, FDA, and SEC have all offered guidance on AI use, saying it must follow current laws while allowing innovation.
At the state level, laws like New York City’s Local Law 144 require yearly audits of automated hiring tools. This shows how important it is to be clear and reduce bias.
International rules also affect U.S. organizations, especially those working with Europe. The EU Artificial Intelligence Act, which the European Parliament advanced in June 2023, sets strict rules for high-risk AI uses, including healthcare. Boards should keep up with these rules to avoid surprise compliance problems.
The UK’s principles-based AI rules focus on safety, openness, and fairness and can give good tips even for U.S. healthcare groups.
Secure Expertise: Add board members or advisors who know about AI and technology to help understand risks and opportunities.
Establish Governance Committees: Create board committees or subcommittees to focus on AI risks, compliance, and ethical use.
Demand Clear Reporting: Ask for regular updates on AI project results, risk assessments, and compliance.
Implement Risk Identification Protocols: Use guides like NIST AI RMF 1.0 to spot and reduce risks.
Monitor Vendor Relationships: Watch outside AI providers for trustworthiness, security, and data use, with agreements that allow audits.
Promote Workforce Training: Make sure all staff, including IT and clinicians, learn about AI limits and ethics.
Develop Crisis Plans: Prepare plans to respond to AI risks quickly to keep operations running and public trust strong.
In conclusion, healthcare boards in the U.S. need to combine strategic thinking with practical actions when managing AI. By learning how AI affects daily operations, finances, and reputation, and setting clear rules and oversight, boards can help their organizations gain from AI while protecting patients and trust. AI tools like those from Simbo AI can be useful if boards manage them carefully within these rules.
The board is responsible for oversight of AI-related corporate strategy, legal compliance, ethics, and enterprise risk management. Directors must understand AI's impact on the business, fulfill their fiduciary responsibilities, monitor policies and internal controls, and assess the opportunities and risks associated with AI use.
Boards need to assess AI’s current uses, strategic opportunities, and potential risks, including operational, financial, compliance, and reputational risks. They should explore how AI disrupts industries, supports innovation, and requires changes to business models to capture competitive advantages while managing associated risks.
Risks include biases in data and algorithms, privacy violations, cybersecurity breaches, and societal harms. Boards must ensure data quality, algorithm transparency, and compliance with emerging AI regulations to mitigate errors and unintended negative outcomes.
AI automates routine and skilled tasks, affecting workforce training and productivity. Boards should oversee the ethical use of AI in employment decisions, ensure bias audits, compliance with anti-discrimination laws, and promote employee training for responsible AI use.
AI compliance challenges include preventing bias, ensuring transparency and explainability, data privacy, intellectual property protection, and adherence to evolving legal and regulatory frameworks across federal, state, and international jurisdictions.
Key frameworks include the US National Artificial Intelligence Initiative Act, the NIST AI Risk Management Framework, EU AI Act, UK AI white papers, and various state and federal regulatory guidance that collectively shape AI risk management, transparency, accountability, and fair use.
Boards should establish clear accountability structures, implement policies for risk identification and mitigation, engage diverse teams, monitor third-party AI software risks, and promote a culture of transparency and responsible AI use aligned with corporate values.
Boards should secure regular AI-related updates, define metrics for evaluating AI projects, engage management on AI expertise and resilience, address cybersecurity and data management, and ensure AI compliance risks are mapped to relevant board committees.
AI influences customer interactions and supply chains, and it may involve extensive data gathering that raises privacy issues. Boards must consider potential biases, misinformation risks, the environmental impact of AI processes, and the ethical implications for stakeholders.
Evaluating AI’s impact helps prevent misuse, unintended harm, discrimination, and violations of privacy or human rights. Boards must oversee policies that mitigate adverse effects on employees, customers, stakeholders, and the environment while ensuring regulatory compliance and ethical integrity.