The use of artificial intelligence in society is a sensitive issue that has been addressed for some time at both European and national level. Analyzing the possible risks and benefits associated with AI systems helps identify the regulatory interventions and strategies needed to ensure that technological progress takes place in a context of security and protection of rights.
Recently, at the request of the Prime Minister, the National Committee for Bioethics and the National Committee for Biosafety, Biotechnologies and Life Sciences (respectively CNB and CNBBSV, jointly the “Committees”) studied the topic of AI applied to the medical field.
In the Opinion “Artificial intelligence and medicine: ethical aspects” of May 2020 (the “Opinion”), the Committees highlighted the opportunities and risks associated with the use of AI in the medical field, accompanying this analysis with ethical reflections. The Opinion speaks of a “digital humanism” and of the need to tackle these issues with the aim of obtaining “medicine with machines and not machines” and of improving the performance of the National Health Service.
The European Commission had already addressed the need for an approach to AI that guarantees security and the protection of rights with the publication of the “White Paper on artificial intelligence” of February 19, 2020, while the MISE Group of Experts on artificial intelligence published the “Proposals for an Italian strategy for artificial intelligence”, a strategic plan, on July 2, 2020.
Below, we examine the Committees’ detailed Opinion, also in light of recent developments in the EU and national regulatory context.
Potential and risks of AI in the medical field: reflections from the Opinion
There are many advantages deriving from the use of AI in medicine. Consider the automated cognitive assistance that AI can provide to doctors in diagnostic and therapeutic activity. For example, AI can support healthcare professionals in classifying a patient’s condition and identifying situations that are critical for the patient (as with the Dermosafe system).
Furthermore, the Committees highlight how AI tools make it possible to reduce the time needed for bureaucratic and routine activities. This allows doctors to invest more time in patient care, strengthening the doctor-patient relationship and dialogue.
At the same time, the Opinion focuses on some critical issues related to AI tools. In particular, it emphasizes that automated cognitive assistance risks diminishing human attention, with a possible erosion of the physician’s abilities. One possible consequence could be the delegation of decision-making to technology, with a consequent reduction of the human and professional qualities of the doctor.
The main critical issues raised in the Opinion are:
- the reliability of AI tools;
- the massive use and possible sharing of data;
- the opacity of the algorithm underlying the operation of AI (in the sense that the steps through which the algorithm interprets and analyzes the data are not always transparent and could also lead to discriminatory results); and
- liability in case of damage caused by the use of AI.
The recommendations presented by the Committees
In the Opinion, the Committees presented some recommendations aimed at mitigating the risks associated with AI. In particular, to remedy the opacity of algorithms and ensure the reliability of AI tools, the Committees suggested more frequent and accurate controls, including the validation of algorithms and the certification of new technologies. The Committees also recommended strengthening surveillance and monitoring, and comparing, through controlled clinical studies, the results of AI systems with the decisions taken by groups of doctors working without the aid of AI tools.
Again with a view to the reliability and safety of AI tools, in the document “Proposals for an Italian strategy for artificial intelligence” the MISE suggested developing, for AI systems and their use, a tool inspired by the Data Protection Impact Assessment (DPIA) provided for by art. 35 of Reg. (EU) 2016/679 (GDPR). This tool would oblige every subject involved in the design, production and use of an AI tool to take on greater responsibility. Specifically, the MISE working group proposed using the Trustworthy AI Impact Assessment (TAIA) self-assessment grid, presented at European level and still being defined. It is a risk assessment tool with which the actors involved in developing and using AI tools must identify possible risks and try to mitigate their impact.
As for the doctor-patient relationship and obtaining consent that is as informed as possible, the Committees advise doctors to inform patients, correctly and in simplified, understandable language, about the risks and benefits of applying new AI technologies, where necessary.
Furthermore, the Opinion (like the “Proposals for an Italian strategy for artificial intelligence”) proposes rethinking the training of health professionals with a more interdisciplinary approach, so as to give operators in the sector greater skills in the field of AI. At the same time, the Committees suggest introducing notions of ethics into the training of engineers and designers of AI systems.
AI and civil liability
A particularly complex issue concerns liability for damage caused by the use of AI, and in particular the identification of the liable party when a patient is harmed following the use of AI tools. Which of the numerous subjects involved in the supply chain (e.g. the designer, the software vendor, the doctor-user) should be held liable when, for example, decisions are made through an AI system, or when AI machines are badly designed or badly used?
According to the Committees, the patient should be able to assert contractual liability against the doctor and the healthcare facility, and also to invoke a sort of “social contact liability” (a category developed by case law) against the other professionals involved, who should have acted with the diligence required by their profession.
The MISE working group also explores this issue in the “Proposals for an Italian strategy for artificial intelligence”. According to these experts, for the moment it is possible to rely on the general principles on liability (including non-contractual liability) already present in our legal system. In particular, the document envisages the possible application of art. 2050 of the Civil Code, relating to the exercise of dangerous activities, to the use of robots and AI systems in activities involving human beings.
At EU level, the European Commission, in the document “Liability for emerging digital technologies” of April 2018, also theorized the possibility of setting up a sort of public fund for the compensation of damage caused by the use of AI systems (a tool similar to the Guarantee Fund for road victims). However, it did not specify how such a fund could function in practice and, above all, by whom it should be established and maintained.
In light of these application difficulties, and of the absence, even at European level, of specific rules on liability for damage caused by AI systems, the Committees in the Opinion and the MISE working group in the “Proposals for an Italian strategy for artificial intelligence” call for a regulatory update of the civil liability rules as applied to new AI technologies.
The Opinion highlights the importance of research on AI within the public sphere of the National Health Service and also encourages the development of public awareness of the potential and risks associated with new technologies, so as to further inform citizens and involve them more in the public debate.
Recently, numerous documents have been published on the strategies to be adopted in the field of artificial intelligence, both at national and EU level. It is therefore desirable that the legislator accept the Committees’ appeal and update existing legislation (especially on liability), taking into account the specific criticalities and potential of AI, without hindering the adoption of these tools in different sectors of society (such as medicine), while at the same time guaranteeing adequate levels of protection of rights (such as the fundamental right to health and the right to data privacy).