March 16, 2026

Brazilian CFM Issues Resolution on the Use of Artificial Intelligence in Medicine


The Brazilian Federal Council of Medicine (“CFM”) issued, on February 27, 2026, Resolution No. 2,454/2026 (the “Resolution”), addressing the use of artificial intelligence (“AI”) in medicine. The Resolution establishes parameters for the use of AI models, systems and applications by physicians and medical institutions, which must be implemented in accordance with standards of auditing, monitoring, governance, training and transparency.

Among the main developments, the Resolution expressly allows the use of AI as a support tool for medical practice, clinical decision-making, healthcare management, scientific research and continuing medical education, while preserving professional autonomy and patients’ right to information. The Resolution also imposes mandatory human supervision and patients’ right of refusal.

The new rules will enter into force on August 10, 2026, 180 days after their publication. Physicians and medical institutions, including hospitals, clinics, and healthcare centers, should consider these provisions in order to comply with the applicable regulatory framework and avoid regulatory sanctions.

I. Regulatory Framework for Medical Institutions

The Resolution introduces stricter regulatory requirements and governance obligations for medical institutions. One of the main measures is the prohibition on establishing targets or policies that subordinate physicians’ professional conduct. Another relevant aspect is transparency, which will be assessed through scientific indicators and accessible reports containing clear, plain-language information, ensuring that patients, physicians, and managers interact with AI responsibly.

Medical institutions must comply with a number of obligations, including:

  • Implementing continuous auditing and monitoring mechanisms;
  • Establishing an AI and Telemedicine Committee to ensure the ethical use of AI systems;
  • Prioritizing the cooperative development of AI models, systems and applications, promoting interoperability and the dissemination of technologies, codes, databases, and best practices with other medical-sector entities, without prejudice to confidentiality obligations; and
  • Conducting a preliminary risk assessment considering, among other factors, potential impacts on patients, the level of human intervention and the criticality of the use context.

Under the Resolution, risk levels must be communicated to patients, and are classified as low, medium and high:

Low-risk solutions
Definition:
  • Minimal or no impact on fundamental rights or on the safety of patients and healthcare professionals.
  • No direct influence on diagnoses or individual treatments, typically performing administrative, operational or low-impact support functions.
Example: Automated scheduling systems, informational chatbots and supply logistics.

Medium-risk solutions
Definition:
  • Potential adverse impact, but one that can be mitigated through active human supervision and appropriate security controls.
Example: Systems that support important clinical or operational decisions, but do not execute them autonomously.

High-risk solutions
Definition:
  • High potential for physical, psychological, or non-pecuniary damages to individuals, or significant impacts on public health if the system operates improperly or without control.
  • Require strict validation processes, regular audits and continuous monitoring.
Example: Systems that directly influence critical medical decisions or perform automated actions with significant clinical consequences, especially when involving vulnerable patients or life-and-death situations.

Although the Resolution refers to an “unacceptable risk” category, it does not provide a detailed or explicit definition of what characterizes such classification.

II. Physician-Patient Relationship

The Resolution emphasizes the protection of physicians’ autonomy in relation to AI technologies. According to the regulation, physicians have the following rights:

  • Right to use AI: Physicians may use AI tools as professional support instruments;
  • Right of refusal: Physicians may refuse to use AI systems that lack regulatory certification or scientific validation, or that violate medical principles;
  • Right to information: Physicians must have access to clear, transparent and understandable information about the AI systems used; and
  • Right to autonomy: Physicians are not required to follow AI-generated recommendations.

At the same time, physicians must:

  • Exercise critical judgment regarding information and recommendations generated by AI systems;
  • Use only systems that ensure minimum information security standards compatible with the protection of sensitive personal data in Brazil;
  • Remain updated regarding AI systems applied to medicine, including their functioning, purposes, limitations, risks and levels of scientific evidence;
  • Inform patients whenever AI is used to support diagnosis, care or treatment, and record this information in the patient’s medical record; and
  • Respect patients’ informed refusal, safeguarding the integrity of the physician-patient relationship, clinical listening, empathy, confidentiality and respect for human dignity.

Regarding medical liability, the Resolution clarifies that physicians remain fully responsible for professional acts performed with the support of AI. However, liability may be excluded in cases of failures exclusively attributable to AI systems, provided that the physician demonstrates diligent, critical and ethical use of such tools. The Resolution expressly prohibits delegating to AI the communication of diagnoses, prognoses or therapeutic decisions to the patient.

III. Personal Data Protection

Personal data of patients used in the development, training, validation and implementation of AI systems must strictly comply with the Brazilian General Data Protection Law (“LGPD”) and with healthcare information security standards. Institutions must implement security measures capable of protecting data against risks of destruction, loss, alteration, leaks or unauthorized access.

The Resolution adopts the “privacy by design” principle, according to which privacy policies must be incorporated throughout the entire lifecycle of AI systems—from development to updates and retraining—while also observing ethical and scientific principles. In this context, technical and administrative security measures must be implemented in accordance with the state-of-the-art and the criticality of the data and systems involved.

Additional obligations reinforce the physician’s duty of confidentiality, and regulatory sanctions may be imposed in cases such as:

  • Failure to safeguard the confidentiality, integrity and security of health data used by AI systems;
  • Failure to ensure the proper processing of patient data—especially sensitive data—with regard to the purposes of processing as communicated to the data subjects;
  • Failure to notify competent authorities of suspected failures, significant risks or improper uses of AI that may compromise patients or medical care; or
  • Use of AI technologies that do not ensure adequate information security standards.

Conclusion

The Resolution issued by the CFM represents an important regulatory milestone in the fields of Data Privacy, Artificial Intelligence and Bioethics, recognizing technological innovation while preserving medical best practices and the dignity of the human person in healthcare. Accordingly, for professionals and institutions adopting AI technologies, regulatory legal planning is strongly recommended to ensure compliance and mitigate potential risk of enforcement by authorities such as the Brazilian Data Protection Authority (ANPD) and the Federal Council of Medicine.

View the full text of the Resolution.

*This content was produced with the participation of law clerk Ana Loiola.
