
Europe sets strict limits on the use of artificial intelligence


Reading Time: 7 minutes

The European Commission has presented a comprehensive legislative proposal that sets strict limits on the use of artificial intelligence in Europe for the riskiest programs and applications, in order to protect the fundamental rights of European citizens. At the same time, it wants to give companies maximum freedom to promote innovation and confidence in the technology, a delicate balance that will undoubtedly create controversy, especially among defenders of the large US technology companies with offices in Brussels.

The European Commission wants artificial intelligence to be a “force for progress”, which requires that it enjoy everyone’s trust and, at the same time, that the risks associated with its use be mitigated. So said Margrethe Vestager, its executive vice president, when presenting last week a wide-ranging legislative proposal on the use of artificial intelligence in the European Union, a world first, which the Commission wants approved and put into operation as soon as possible.

The objective of the Community legislative proposal is to strike a balance between the protection of fundamental rights and technological innovation, in order to create a favorable framework in the European Union for the development of the technologies of the future. As Thierry Breton, Commissioner for the Internal Market, summed up, “artificial intelligence offers immense potential but also presents a certain number of risks”. He described the legislative proposal together with Vice President Vestager, as the two have done on previous occasions to publicize important Community legislative initiatives on digital technology and its social and economic implications.

The aim of the proposal is to strike a balance between the protection of fundamental rights and technological innovation, to create a favorable framework in the EU for the development of the technologies of the future

The Commission wants to allow the private and public sectors to develop artificial intelligence in Europe with confidence, based on establishing a climate of trust, but within certain limits. “Our proposal does not focus on artificial intelligence technology itself but on how it is used and for what,” said Vestager, “and is based on taking a risk proportional to the benefits to be achieved.” The logic behind the proposal is simple: “The greater the risk that a specific artificial intelligence can pose to our lives, the stricter the rules,” she declared.

Pyramid with four levels of risk

In line with this logic, the proposed legislation classifies uses of artificial intelligence into four categories. As in a pyramid, most applications will be found at the base, which represents no risk or minimal risk: for example, Vestager said, filters that recognize and block spam messages or, in a company, systems that minimize discarded data to optimize resources. Applications of this type will face no restrictions.

Above this basic level will be artificial intelligence uses and applications with limited risk, such as those that help us find a book or the nearest store. In this case, the vice president indicated, their use will be allowed, but subject to transparency obligations. The objective is that users can interact with the machines without danger.

At the third level of the pyramid will be the “high risk” uses of artificial intelligence, the main focus of the regulation. These are uses, Vestager said, that interfere with important aspects of our lives, hence the high-risk classification. Typical examples are applications that screen job candidates based on their educational records, or systems that score someone applying for a bank loan. Software used in autonomous cars or medical devices will also be considered high risk, because it can affect our safety or health, added the vice president.

Systems considered “high risk” will be subject to five strict obligations because they can potentially have a strong impact on our lives, Vestager stressed when presenting the proposed artificial intelligence regulation on April 21 in Brussels, some days ahead of schedule because the information had been leaked. First, AI system providers will be required to use high-quality data to ensure there is no discrimination or bias. Second, detailed information must be provided on how the system works, so that the authorities can verify its correct operation. Third, users must be given substantial information so that they can use the system correctly. Fourth, there must be an appropriate level of human oversight, in both design and operation. Finally, the highest standards of cybersecurity and reliability must be respected.

At the top of the pyramid are the uses that will be totally prohibited, simply because they are considered unacceptable. These are systems that use subliminal techniques to hurt or harm someone, such as a toy with a voice assistant that manipulates children into doing dangerous things. Applications that rate people based on their social behavior will also be prohibited, because such scores could influence how the authorities treat a person or how a bank handles a credit application.

Clarity and fines for non-compliance

According to Thierry Breton, the regulations finally approved by the European Parliament and the Member States should provide sufficient clarity so that companies know what to expect and, at the same time, attract the world’s largest data-driven industries to the European continent. That dual purpose will be difficult to achieve, because the very existence of the regulation already acts as a brake on the development of applications, rather than as a draw.

The first reactions have already been noted. For the promoters of technological innovation, the European Commission’s effort to make the European Union a world leader in artificial intelligence is inconsistent with the creation of a regulation that regulates it excessively. Staunch defenders of fundamental rights, on the other hand, fear that individual and collective freedoms will ultimately be sacrificed for the sake of technological innovation.

The national authorities, according to the proposed text, will be responsible for ensuring that artificial intelligence systems comply with the obligations specified in the regulation, each within its own sphere of competence; each Member State will have the obligation to identify the most suitable authority for each case. For example, Vestager pointed out, privacy issues will be handled by the data protection agencies of each member country, while market surveillance bodies will have to determine which products are safe. In the event of recurring non-compliance, a fine of up to six percent of annual turnover or up to 20 million euros may be applied.

“For Europe to be a global leader in trustworthy artificial intelligence, we need companies to be able to build advanced systems under the best conditions,” says the Executive Vice President of the Commission.

There are issues, Vestager acknowledged, that can be problematic, such as remote biometric identification. The proposal focuses on remote biometric identification of several individuals simultaneously, which is prohibited or severely limited, even when carried out by the competent authorities. At a border, for example, there is no problem, because border authorities are understood to have the right to perform such checks, as when fingerprints or facial recognition are requested.

The proposed regulation prohibits real-time remote biometric identification in public places, because it considers that there is no room for mass surveillance in our society. There is a series of exceptions to this rule, defined very strictly, which will in any case be limited and regulated. One permitted exception is the search for a missing minor.

Trust in the system is essential

For the regulation to work and for appropriate applications of artificial intelligence to be developed, sufficient confidence must be generated in the legal framework to be adopted, and companies and citizens must also trust that artificial intelligence applications will benefit them.

Furthermore, as Europe’s digital future has been defined, an ecosystem of trust must go hand in hand with an ecosystem of excellence. “For Europe to be a global leader in trusted artificial intelligence, we need to give companies access to the best conditions to build advanced artificial intelligence systems,” said Vestager.

This is the idea behind our revised and coordinated plan on artificial intelligence, she added. Investments within Member States need to be coordinated to ensure that money from programs such as Digital Europe and Horizon Europe is invested where it is most needed. For example, Vestager said, in the high-performance computing program or in the creation of test centers and improvement of artificial intelligence systems.

It is also important to identify high-impact sectors within the European Union in order to accelerate the development of critical artificial intelligence programs, such as smart agriculture, where better and more sustainable crops can be achieved thanks to more suitable sensors.

Several years to develop the proposal

Work on this draft regulation on artificial intelligence began several years ago, and it is the fruit of numerous specialists in the field. In 2018, the European Strategy on Artificial Intelligence was published after extensive consultations, and in December of that year a Coordinated Plan on Artificial Intelligence followed. In 2019, a group of experts developed Ethics Guidelines for trustworthy artificial intelligence.

In 2020 a White Paper was published, with the central idea of an ecosystem of excellence and trust, which is at the core of the current proposal. The public consultation on this White Paper was accompanied by a report on the safety implications of artificial intelligence, the Internet of Things and robotics, which have now been taken into account in the proposal to Parliament and the Council on machinery products.

The Commission’s proposal for the artificial intelligence regulation is very long: 108 pages plus an annex. It is accompanied by the review of the 2021 Coordinated Plan on Artificial Intelligence. The Commission’s statement synthesizes the proposals submitted to Parliament, with links to the relevant documents.

Artificial intelligence, and especially powerful facial recognition tools, has been much debated in the United States lately. Last week, a bill to limit electronic surveillance by federal investigators and local law enforcement, dubbed “The Fourth Amendment Is Not For Sale Act”, was presented to Congress with the support of both Republicans and Democrats. By contrast, the Chinese government is making extensive use of facial recognition techniques and even heavily publicizing its progress, apparently with real-time recognition even in large crowds, such as packed train stations or demonstrations.

In the regulations proposed by the European Commission, facial recognition and massive biometric identification are expressly prohibited, especially in real time, with exceptions only for highly justified cases, even for the authorities. It remains to be seen how the final legislative text will be drafted once approved, both on this point and on the many less striking but equally contentious aspects the draft addresses. Several draft laws related to digital technology are accumulating in Parliament that should give legal form to the European Union’s defense of the fundamental rights of its citizens.