However, this idea fails to take into account the complex workings of artificial intelligence, which relies on statistical links far removed from human reasoning. Machine-learning AIs are often unable to explain how they arrive at their results. This lack of transparency, or “explainability”, turns them into veritable “black boxes”.
AI to revolutionize healthcare
In healthcare, the AI of the future is set to become an increasingly sophisticated tool, harnessing ever more data. It will no longer be just a matter of diagnosing a single pathology, but of carrying out truly comprehensive assessments, combining imaging, medical biology and, no doubt, connected health devices (such as the ECG on the Apple Watch). Jean-Emmanuel Bibault, an oncologist at the Georges Pompidou Hospital, predicts that, very soon, medicine will be unable to understand the diagnoses provided by AI.
Some AIs are already capable of detecting breast or pancreatic cancers several years before they would normally appear. Imagine the reaction of a patient who is told that an AI estimates he or she has an 85% chance of developing a fatal cancer within two years, without medicine being able to explain how the AI arrived at this prediction. Just as mechanics plug in a computer to diagnose faults, doctors risk losing their central role in diagnosis. This is inevitable, as AI is already better at it: as Jean-Emmanuel Bibault points out, AI diagnoses from clinical pictures with a success rate of 87%, while doctors achieve only 65%.
One specialized AI, DrOracle, even scored 97/100 on the US medical school exit exam (compared with 75 for ChatGPT-4). An impressive score, all the more so as it only takes around 70% to pass this exam.
Efforts are underway to improve AI transparency. Researchers are working on explainability techniques aimed at making AI decisions more comprehensible to humans. These approaches often combine deep learning with expert systems, the latter operating on causal rules defined by science. Such hybrid solutions, however, tend to constrain the potential of AI.
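As an illustration, here is a minimal Python sketch of one widely used explainability technique, permutation feature importance: it estimates how much a model relies on each input by shuffling that input and measuring the drop in accuracy. The scikit-learn model and dataset used here are illustrative assumptions, not the systems described above.

```python
# Minimal sketch of permutation feature importance (illustrative only).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative dataset and model: not the clinical systems discussed in the text.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# a large drop means the model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, importance in sorted(zip(X.columns, result.importances_mean),
                               key=lambda p: -p[1])[:5]:
    print(f"{name}: {importance:.3f}")
```

Techniques of this kind indicate which inputs drove a decision, but they still fall short of the step-by-step causal reasoning a human expert could provide.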
In the medical field, AI will be rigorously controlled by researchers and by doctors themselves. But what about banks refusing a loan, recruiters rejecting a candidate, or schools turning down an enrolment? Will they go to such lengths to control their AIs? In a context of relentless pursuit of productivity, nothing is less certain.
Our rights in the face of AI
Following the entry into force in May 2018 of the GDPR (General Data Protection Regulation), the European regulation protecting the personal data of EU citizens, the president of Italy’s main employers’ union ironically remarked: “America innovates, China copies, Europe regulates”.
Under the impetus of European Commissioner Thierry Breton (a former French Minister of the Economy under Jacques Chirac), the EU has further illustrated this adage by being the first to regulate AI, demonstrating a certain responsiveness and a clear grasp of what is at stake with AI.
The AI Act to manage risks
The AI Act, which came into force in August 2024, defines five levels of risk: minimal, limited, general-purpose, high and unacceptable. Minimal-risk AI includes technologies such as spam filters, voice assistants like Alexa and Siri, product recommendations and machine translation. These tools are considered low-risk and are not subject to any particular regulatory requirements.
However, AIs classified as limited-risk, such as chatbots, content filters on social networks and content recommendations (Netflix, press…), must now be transparent about how they operate and how the data they process is used.
General-purpose AIs, which include advanced virtual assistants such as ChatGPT and predictive analytics platforms, are subject to more stringent requirements. These systems must implement rigorous risk management throughout their lifecycle, guarantee the quality and representativeness of the data used, and provide detailed technical documentation. Transparency is fundamental: users need to know that they are interacting with an AI. In addition, human oversight must be built in to enable appropriate supervision, and accuracy, robustness and cybersecurity must be kept at a high level to avoid errors and hacking.
High-risk AIs, used in sensitive sectors such as healthcare, education, recruitment, critical infrastructure management (power…), law enforcement and justice, are subject to similar but even stricter obligations. These systems must comply with rigorous standards to guarantee their security and fairness. Facial recognition for surveillance also falls into this category, underlining the need to regulate potentially intrusive technologies.
Finally, the unacceptable risk level prohibits AIs involved in subliminal manipulation (advertising, social networks, games…), social scoring, and real-time biometric surveillance (facial recognition, but potentially also tattoo recognition), with a few exceptions, such as investigations into kidnappings or terrorist threats. These restrictions are designed to prevent unregulated mass surveillance and to protect individual freedoms.
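To gather the tiers described above in one place, here is a purely illustrative sketch that encodes them as a small data structure. The tier names and examples come from the text; the structure itself and the simplified obligation lists are assumptions for illustration, not the wording of the regulation.

```python
# Illustrative only: a toy encoding of the AI Act risk tiers as summarized above.
from enum import Enum

class RiskTier(Enum):
    MINIMAL = 1          # spam filters, voice assistants, product recommendations
    LIMITED = 2          # chatbots, social-network content filters, recommendations
    GENERAL_PURPOSE = 3  # advanced assistants such as ChatGPT
    HIGH = 4             # healthcare, education, recruitment, critical infrastructure
    UNACCEPTABLE = 5     # subliminal manipulation, social scoring, real-time biometrics

# Obligations sketched in the text, keyed by tier (simplified, not exhaustive).
OBLIGATIONS = {
    RiskTier.MINIMAL: [],
    RiskTier.LIMITED: ["transparency about operation and data use"],
    RiskTier.GENERAL_PURPOSE: ["lifecycle risk management", "data quality",
                               "technical documentation", "disclosure to users",
                               "human oversight", "accuracy, robustness, cybersecurity"],
    RiskTier.HIGH: ["general-purpose obligations, applied even more strictly"],
    RiskTier.UNACCEPTABLE: ["prohibited, with narrow law-enforcement exceptions"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the simplified obligation list for a given risk tier."""
    return OBLIGATIONS[tier]

if __name__ == "__main__":
    print(obligations_for(RiskTier.LIMITED))
```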
Regulatory challenges in Europe
This legislation is part of a growing global trend to regulate emerging technologies. In the USA, debates on AI regulation are gaining momentum, but the country is taking a more innovation-led approach.
The AI Act could well become a model for other parts of the world seeking to regulate AI in a balanced way. It’s an important first step, but it can’t answer all the questions posed by the rise of artificial intelligence.
The speed of technological progress calls for adaptable, evolving regulation. As it stands, the AI Act creates legal uncertainty, which could slow down the development of AI on the continent: Apple has delayed the European launch of its Apple Intelligence features, and Meta has postponed the European release of the latest version of its open-source Llama model.
Optimists see this as an opportunity for European companies such as Mistral AI. Nevertheless, the question remains: will they be able to keep up with the pace of innovation while complying with strict rules that their foreign competitors are not obliged to follow?
The answer to this question may well determine the future of AI in Europe.