In the United States, cancer healthcare centers deal with large amounts of private patient data every day. The use of AI, which depends on huge sets of data for learning and decision-making, raises important questions about data privacy.
AI systems use information like medical histories, genetic data, imaging results, and treatment responses. Studies show AI can predict pancreatic cancer risk almost as well as genetic tests by looking at millions of patient records, including disease codes and the timing of clinical events. This shows AI's abilities but also points to the need to keep patient information safe.
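As a rough sketch of the idea behind such models, the toy example below turns a patient's timed diagnosis-code history into features, weighting recent codes more heavily, and scores risk with a simple logistic function. The codes, weights, and half-life are illustrative assumptions, not a validated clinical model; real systems learn their weights from millions of records.

```python
from math import exp

def featurize(events, half_life_days=365.0):
    """events: list of (icd_code, days_before_today) pairs.
    Recent codes contribute more, mimicking the 'timing of events' signal."""
    feats = {}
    for code, days_ago in events:
        weight = 0.5 ** (days_ago / half_life_days)  # exponential recency decay
        feats[code] = feats.get(code, 0.0) + weight
    return feats

def risk_score(feats, weights, bias=-4.0):
    """Logistic score from code features; weights here are hypothetical."""
    z = bias + sum(weights.get(code, 0.0) * v for code, v in feats.items())
    return 1.0 / (1.0 + exp(-z))

# Hypothetical weights for codes loosely linked to pancreatic cancer risk
# (chronic pancreatitis, type 2 diabetes, abnormal weight loss).
WEIGHTS = {"K86.1": 1.8, "E11.9": 0.9, "R63.4": 1.2}

patient = [("K86.1", 30), ("E11.9", 400), ("R63.4", 10)]
print(round(risk_score(featurize(patient), WEIGHTS), 3))
```

The point of the sketch is only the shape of the computation: event timing changes the score, so two patients with the same codes can carry different risks.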
Medical leaders must follow rules like the Health Insurance Portability and Accountability Act (HIPAA), which protects health information privacy and security. Although HIPAA sets rules, AI often needs to share data across different platforms and institutions, which can increase the chance of data leaks or unauthorized access.
Also, AI development often gathers data from many places, sometimes including patient data not originally collected for AI use. This raises ethical questions about whether patients have consented and about their right to control their data. Organizations need strong policies to de-identify patient records, encrypt data, and set clear rules about who can access sensitive information.
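As a hedged sketch of one such control, the snippet below pseudonymizes a record before it reaches an AI pipeline: a keyed hash (HMAC) replaces the patient identifier, so the same patient always maps to the same token without exposing the identifier itself. The field names and key handling are illustrative assumptions, not a HIPAA compliance recipe; in practice the key must be stored separately from the de-identified data.

```python
import hmac
import hashlib

# Fields treated as direct identifiers in this illustrative example.
DIRECT_IDENTIFIERS = {"name", "address", "phone", "ssn", "mrn"}

def deidentify(record, secret_key: bytes):
    """Return a copy of the record with identifiers stripped and the
    medical record number replaced by a keyed-hash pseudonym."""
    token = hmac.new(secret_key, record["mrn"].encode(), hashlib.sha256).hexdigest()[:16]
    clean = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    clean["patient_token"] = token
    return clean

rec = {"mrn": "12345", "name": "Jane Doe", "dx_code": "C25.9", "age": 62}
print(deidentify(rec, b"demo-key-rotate-me"))
```

Because the token is deterministic for a given key, records for one patient can still be linked across datasets for model training, while the raw identifier never leaves the covered entity.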
Algorithmic bias happens when AI systems give results that favor or hurt certain groups because of problems with the data or design. In cancer care, this can cause unfair diagnosis or treatment suggestions and make existing healthcare differences worse.
Matthew G. Hanna and colleagues, in a 2025 review, traced AI bias to three main sources.
AI has performed well in cancer research and care, for example reading thyroid ultrasounds to help patients avoid unnecessary biopsies and spotting cancer cells too small to see with the naked eye. Researchers at Penn Medicine built AI tools that quickly analyze large volumes of imaging data to improve diagnostic accuracy.
But if AI models are biased, patients from underserved groups may get worse diagnoses or treatments.
To reduce bias, cancer care centers in the U.S. should use training data that represents many kinds of patients. They also need to audit and update AI models regularly to keep them fair and useful as medical care changes.
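One concrete form such a routine check could take is a subgroup audit: compare the model's sensitivity (true-positive rate) across patient groups and flag any gap beyond a tolerance. The group labels, example data, and 10% threshold below are illustrative assumptions, not a clinical standard.

```python
from collections import defaultdict

def sensitivity_by_group(records):
    """records: list of (group, y_true, y_pred) with binary labels.
    Returns the fraction of true positives caught, per group."""
    tp = defaultdict(int)
    pos = defaultdict(int)
    for group, y_true, y_pred in records:
        if y_true == 1:
            pos[group] += 1
            tp[group] += y_pred
    return {g: tp[g] / pos[g] for g in pos}

def audit(records, max_gap=0.10):
    """Flag the model for review if subgroup sensitivities differ too much."""
    rates = sensitivity_by_group(records)
    gap = max(rates.values()) - min(rates.values())
    return rates, gap, gap <= max_gap

# Toy audit data: group_b's positives are caught far less often.
reviews = [
    ("group_a", 1, 1), ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1),
    ("group_b", 1, 1), ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 0, 1),
]
rates, gap, passed = audit(reviews)
print(rates, round(gap, 2), passed)
```

Running this audit on every model update, not just once at deployment, is what turns fairness from a launch checkbox into the ongoing process the text describes.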
A hard problem with using AI in cancer healthcare is deciding who is responsible when AI causes harm. AI is often called a “black box” because it is hard to know how it makes decisions. This makes it tricky to say if AI makers, healthcare workers, or managers are at fault when mistakes happen.
AI in healthcare also raises broader ethical issues. Dr. Peng Jiang of the National Cancer Institute notes that AI helps develop new cancer treatments and tools for patient screening, but ethical problems remain. For example, insurance companies might misuse AI risk predictions to refuse coverage to patients with higher cancer risks, creating social and ethical challenges.
Healthcare leaders in the U.S. need to work with lawyers, ethics boards, and tech partners to create policies that assign clear responsibility and build patient trust while using AI well.
Apart from helping doctors make decisions, AI can improve the daily work in cancer care centers. AI can assist with front-office phone calls and answering services to help staff work more efficiently.
AI systems can answer patient questions, set up appointments, send test results, and send follow-up reminders. This helps patients and frees clinical staff to spend more time with patients.
In cancer treatment, AI can bring together different data—like electronic medical records, images, and gene information—into one workflow. AI tools can flag suspicious tumors in images for doctors to check first, which can speed up diagnosis. AI can also help change radiation doses or surgery plans in real time to make treatment better.
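The flagging step above amounts to a prioritized reading worklist: studies scored by a model are queued so the most suspicious cases surface first. The sketch below shows that pattern with a priority queue; the study IDs and suspicion scores are stand-ins for real model output, not any vendor's API.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Study:
    priority: float                       # negated score, so heapq pops highest first
    study_id: str = field(compare=False)  # not used for ordering

def build_worklist(scored_studies):
    """scored_studies: iterable of (study_id, suspicion_score in [0, 1]).
    Returns study IDs ordered from most to least suspicious."""
    heap = []
    for study_id, score in scored_studies:
        heapq.heappush(heap, Study(-score, study_id))  # max-heap via negation
    return [heapq.heappop(heap).study_id for _ in range(len(heap))]

print(build_worklist([("ct-001", 0.12), ("ct-002", 0.91), ("ct-003", 0.47)]))
# → ['ct-002', 'ct-003', 'ct-001']
```

The radiologist still reads every study; the model only changes the order, which is one reason triage tools face a lower regulatory bar than autonomous diagnosis.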
However, using AI for workflow needs care with data privacy, system reliability, and training staff to use AI. IT leaders and managers in U.S. hospitals must check that AI systems are safe, work well with current tools, and protect against mistakes or bias.
Rules and laws in the U.S. are changing to keep up with AI advances. The Food and Drug Administration (FDA) has created pathways to approve AI medical devices and software. But unlike regular devices, AI can keep learning and changing after deployment, which makes ongoing oversight harder.
AI tools used in cancer diagnosis and treatment, such as those for reading images or analyzing genetic data, must prove they are safe, effective, and trustworthy before being widely used. The FDA requires strong clinical testing and risk assessments. After deployment, AI still needs to be monitored to catch failures that could affect patient safety.
Data privacy laws like HIPAA and ongoing federal discussions about AI regulation make compliance more complex. Hospitals and providers must stay updated and adjust how they use AI as needed.
The rules put a duty on hospitals and doctors to balance new ideas with patient safety and ethics. Teamwork between IT staff, medical leaders, and legal experts is important to handle these rules well.
Research by Matthew G. Hanna points out that fixing bias and ethical problems with AI is not a one-time job. AI systems need ongoing checks to find bias, mistakes, or unexpected effects. Healthcare groups should form committees with data experts, ethicists, doctors, and managers to watch over AI usage.
These groups help keep AI fair, clear, and matching medical goals. AI systems need to be updated with new and varied data to work well for many kinds of patients.
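One way such oversight committees can watch for a model drifting away from its training data is the population stability index (PSI), which compares the distribution of model scores at deployment against the training-time baseline. The histograms and the common rule-of-thumb threshold of 0.2 below are illustrative assumptions, not a regulatory requirement.

```python
from math import log

def psi(expected, actual, eps=1e-6):
    """Population stability index between two histograms over the same bins.
    expected: training-time counts; actual: counts observed in deployment."""
    e_tot, a_tot = sum(expected), sum(actual)
    total = 0.0
    for e, a in zip(expected, actual):
        e_pct = max(e / e_tot, eps)  # clamp to avoid log(0)
        a_pct = max(a / a_tot, eps)
        total += (a_pct - e_pct) * log(a_pct / e_pct)
    return total

baseline = [40, 30, 20, 10]  # score histogram when the model was validated
live     = [10, 20, 30, 40]  # shifted distribution seen after deployment
print(round(psi(baseline, live), 3))
```

A PSI above roughly 0.2 is a common trigger for the committee to investigate whether the patient population, the data pipeline, or the model itself has changed.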
Ethical oversight also means telling patients clearly about how AI is used in their care, including what it can and cannot do. Honest talk helps patients understand and trust the system.
For medical practice leaders, IT managers, and owners in U.S. cancer care facilities, adopting AI means handling many interrelated challenges: protecting patient privacy, reducing algorithmic bias, establishing accountability, and keeping up with regulation.
By managing these issues carefully, cancer care centers in the United States can use AI responsibly. This helps improve care and operations while protecting patients’ rights and safety.
Artificial Intelligence has the potential to change cancer research and treatment. But its success in real healthcare depends on dealing well with ethical, privacy, and regulatory issues. Only through careful use and monitoring can AI’s full benefits be reached for patients and providers across the country.
AI enhances cancer research by aggregating vast data, identifying patterns, making predictions, and analyzing information faster and with fewer errors than humans, aiding prevention, diagnosis, and personalized treatment.
AI predicts cancer risk by analyzing large datasets, including disease codes and their timing, to identify high-risk patients earlier and more accurately than traditional methods or genetic testing, potentially overcoming screening barriers.
AI aids diagnosis by analyzing imaging (like ultrasounds and MRIs) to detect tumors with high precision, reducing invasive procedures and supporting radiologists to flag suspicious areas for further examination.
AI personalizes treatment by predicting responses based on genomics data, optimizing radiation dosage, assisting surgeries, and enabling dynamic treatment adjustments, thereby enhancing precision medicine and intervention efficacy.
Challenges include data privacy, security, ethical concerns, potential bias due to human-influenced algorithms, regulatory adaptation, reliability, scalability, and cost, limiting widespread adoption and raising accountability questions.
AI accelerates drug discovery by enhancing understanding of protein structures and mining genetic data to identify drug targets quickly and with more accuracy, facilitating faster and more efficient research pipelines.
Concerns include potential misuse of sensitive health data, insurance discrimination based on AI predictions, algorithmic bias, and uncertainty on legal accountability when AI-driven decisions cause harm.
AI models using large-scale health records have demonstrated accuracy at least comparable to genetic sequencing tests for predicting cancers like pancreatic cancer, often at lower cost and with broader applicability.
AI-driven imaging analysis is expected to become widespread, enabling earlier, more accurate tumor detection by uncovering subtle or invisible cancer cells, thereby improving diagnostic speed and outcomes.
CRI supports projects that combine AI with genomics to identify therapeutic gene targets, biomarkers for treatment screening, and AI frameworks to analyze T cell biology, aiming to enhance cell therapies for solid tumors.