Key considerations for regulating artificial intelligence in healthcare
The World Health Organisation (WHO) has released a new publication listing key regulatory considerations for artificial intelligence (AI) in health. The publication emphasises the importance of establishing AI systems' safety and effectiveness, rapidly making appropriate systems available to those who need them, and fostering dialogue among stakeholders, including developers, regulators, manufacturers, health workers, and patients.
WHO recognises the potential of AI to enhance health outcomes by strengthening clinical trials; improving medical diagnosis, treatment, self-care, and person-centred care; and supplementing health care professionals' knowledge, skills, and competencies. For example, AI could be beneficial in settings that lack medical specialists, such as for interpreting retinal scans and radiology images.
However, AI technologies, including large language models, are being rapidly deployed, sometimes without a full understanding of how they may perform, which could either benefit or harm end-users, including healthcare professionals and patients. When using health data, AI systems may have access to sensitive personal information, necessitating robust legal and regulatory frameworks to safeguard privacy, security, and data integrity. This publication aims to help countries set up and maintain such frameworks.
In response to growing country needs to responsibly manage the rapid rise of AI health technologies, the publication outlines six areas for regulation of AI for health.
• To foster trust, the publication stresses the importance of transparency and documentation, such as documenting the entire product lifecycle and tracking development processes.
• For risk management, issues like 'intended use', 'continuous learning', human interventions, training models, and cybersecurity threats must all be comprehensively addressed, with models made as simple as possible.
• Externally validating data and being clear about the intended use of AI helps assure safety and facilitate regulation.
• A commitment to data quality, such as through rigorously evaluating systems pre-release, is vital to ensuring systems do not amplify biases and errors.
• Understanding the scope of jurisdiction and consent requirements, in service of privacy and data protection, is key to navigating complex regulations such as the General Data Protection Regulation (GDPR) in Europe and the Health Insurance Portability and Accountability Act (HIPAA) in the United States of America.
• Fostering collaboration between regulatory bodies, patients, healthcare professionals, industry representatives, and government partners can help ensure products and services stay compliant with regulation throughout their lifecycles.
AI systems are complex and depend not only on the code they are built with but also on the data they are trained on, which may come from clinical settings and user interactions, among other sources. Better regulation can help manage the risk of AI amplifying biases present in that training data.
For example, it can be difficult for AI models to accurately represent the diversity of populations, leading to biases, inaccuracies, or even failure. To help mitigate these risks, regulations can be used to ensure that the attributes—such as gender, race, and ethnicity—of the people featured in the training data are reported and datasets are intentionally made representative.
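As a minimal sketch of what such reporting could involve (the function, attribute names, and threshold below are hypothetical illustrations, not part of the WHO publication), a pre-release check might compare the demographic distribution of a training set against a reference population and flag underrepresented groups:

```python
from collections import Counter

def representation_gaps(records, attribute, reference, tolerance=0.05):
    """Compare an attribute's distribution in a training set against a
    reference population, and flag groups whose observed proportion falls
    short of the expected proportion by more than `tolerance`."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, expected in reference.items():
        observed = counts.get(group, 0) / total
        if expected - observed > tolerance:
            gaps[group] = {"expected": expected, "observed": round(observed, 3)}
    return gaps

# Hypothetical dataset: 90% of records from one group vs. a 50/50 reference.
records = [{"sex": "male"}] * 90 + [{"sex": "female"}] * 10
print(representation_gaps(records, "sex", {"male": 0.5, "female": 0.5}))
# Flags "female" as underrepresented (observed 0.10 vs. expected 0.50).
```

A regulator could require developers to report such distributions for attributes like sex, race, and ethnicity, making gaps visible before a system is deployed.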
The new WHO publication aims to outline key principles that governments and regulatory authorities can follow to develop new guidance or adapt existing guidance on AI at national or regional levels.
Source: World Health Organisation