The draft Regulation on Artificial Intelligence

After a prior consultation, a White Paper and requests from other institutions such as the Parliament and the Council, the Commission has just exercised its power of legislative initiative, and a draft Regulation on “a European approach to artificial intelligence” (hereinafter, AI) has been leaked.

The draft defines AI as software which, based on the mathematical and programming techniques listed in Annex I (which are what is commonly understood as “algorithms”, although that word has a broader meaning, as I have studied here), produces results that serve to make predictions, recommendations for future decisions, and the like.

The starting point of this future regulation is already known: AI is an opportunity that cannot be renounced, because it allows efficiency gains and because if we do not develop it, others will. At the same time, it entails significant risks: an “intelligent” system controlling a power plant or a medical treatment machine may malfunction and cause serious damage; people may be discriminated against on the basis of the “reports” produced by an intelligent data analysis system; or omnipresent “Big Brother”-type surveillance systems may be set up. The risk is ultimately the possibility of harm affecting the integrity or life of individuals, damage to property, serious impacts on society as a whole or on economic activities of major importance, distortion in the provision of essential services, or negative impacts on fundamental rights (including the right not to be discriminated against and the right to privacy). The use of AI thus involves a risk of infringing legal rights and interests that are already protected at the statutory and, in some cases, constitutional level.

This starting point (the need to minimize the risks produced by an activity that cannot or will not be renounced) is the same as in almost any other regulation of risky activities, for example, industrial activities.

From this point, several instruments or techniques of intervention can be used: the total or partial prohibition of certain activities in order to avoid risks (following the precautionary principle); an authorization regime (preventive control); other forms of preventive control (responsible declarations, for example); or an exclusively subsequent (ex post) control (civil and, where appropriate, criminal liability of those who cause damage using this risk-creating technique), all accompanied by an inspection apparatus, which normally acts at the request of the injured parties and helps them and the courts to discover and prove the infringing conduct.

This draft regulation combines these approaches, which are also adapted to the specialty and complexity of the object of regulation.

Prohibited AI applications

From the outset, certain uses of AI are prohibited (Article 4): those that “manipulate” citizens (leading them to engage in conduct to their detriment), those that use personal information to detect the vulnerabilities of subjects and lead them to act to their detriment, those that use personal information obtained from very diverse sources to establish profiles that classify individuals with the result of discriminatory treatment in contexts different from those in which the information was obtained, and indiscriminate and general surveillance systems (“big brother”).

The truth is that the first three are meant to capture extreme versions of some common uses of AI and, precisely because they are described in such extreme terms and with such an accumulation of adjectives, it will be very difficult to apply them, so the ban is of little practical use.

To say that an AI application “manipulates human behavior,” “causing a person to behave, take an opinion or make a decision to his or her own detriment” (Article 4), is to say everything and nothing. Much of advertising could fall under that definition, yet at the same time human behavior remains voluntary (that, at least, is the belief that underpins the entire legal system), so we can hardly say that an application has a compelling force that “obliges” citizens to behave in a certain way. Besides, there is no need to use AI to manipulate people or to try to lead them to take decisions that favor the interests of those steering them rather than their own.

The same can be said of applications that “exploit information or predictions about a person or group of people to attack their weaknesses or special circumstances, leading a person to behave, form an opinion, or make a decision to their own detriment.”

Applications that “classify” people based on data obtained from their behavior or personal characteristics (or predictions about them), and that lead to discriminatory treatment of certain people or groups, in contexts that have nothing to do with the data on which the classification is based, or in a disproportionate manner, are one of the most frequently described scenarios of misuse of AI. An example would be a system that calculates a car insurance premium from various circumstances that “predict” the greater or lesser probability of a driver causing an accident, and that ends up increasing the premium, for example, for those who have less education, lack a permanent job, or are forced to travel long distances by car to get to work every day. In any case, the adjectives used to define the offence [“systematic unfavorable treatment”, “negative treatment of certain individuals or entire groups of individuals disproportionate to the seriousness of their social conduct”] suggest that the possibility of using data analysis so that companies can adapt their advertising efforts, their offers or their contractual conditions to the characteristics of different customers, which is one of the most common uses of AI, is not being completely eliminated.
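By way of illustration only, here is a minimal sketch of the kind of premium-scoring rule described above. The feature names, weights and thresholds are invented for the example and do not come from the draft Regulation; the point is simply to show how characteristics with no causal relation to driving can end up determining the price.

```python
# Hypothetical, deliberately naive premium scoring. All features and weights
# are invented for illustration; none of this comes from the draft Regulation.

def annual_premium(driver: dict) -> float:
    """Estimate a car-insurance premium from personal characteristics."""
    base = 400.0
    # "Education" and "permanent job" are proxies with no causal link to
    # driving skill: relying on them is precisely the kind of classification
    # the draft is concerned about.
    if driver["education_years"] < 10:
        base *= 1.25
    if not driver["permanent_job"]:
        base *= 1.15
    base *= 1.0 + 0.004 * driver["daily_commute_km"]  # long commutes cost more
    base *= 1.0 + 0.10 * driver["accidents_last_5y"]  # past accidents cost more
    return round(base, 2)

print(annual_premium({"education_years": 8, "permanent_job": False,
                      "daily_commute_km": 60, "accidents_last_5y": 0}))
# Noticeably more expensive than for an otherwise identical driver with more
# schooling and a permanent job, despite an identical accident record.
```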

Although some of these prohibited conducts may exceptionally be carried out by a public authority when authorized by a regulation and for public safety purposes, it seems rather that this exception is intended for the remaining prohibited conduct, namely generic surveillance, which in certain specific situations may be authorized for security reasons.

AI applications subject to authorization

Remote biometric identification in public places (video surveillance in streets, for example) is subject to administrative authorization, which will only be granted when there is an enabling regulation, for the fight against serious crimes (including terrorism) and subject to limits and guarantees. As is well known, this does not refer to cameras that simply record what happens in the street and subsequently help to establish what occurred and to search for possible perpetrators, but to systems that compare the captured images with databases, allowing a person to be identified automatically (the draft does not say that this authorization requirement applies only to applications in which identification is immediate or in real time).
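In essence, such a system matches what the camera sees against a stored gallery. Here is a minimal sketch of that matching step, assuming faces have already been converted into numeric embedding vectors by some model; the model itself, the watch-list and the threshold are assumptions made for the example.

```python
import numpy as np

def identify(live_embedding: np.ndarray,
             watchlist: dict[str, np.ndarray],
             threshold: float = 0.8) -> str | None:
    """Return the watch-list identity that best matches the live embedding,
    or None if no stored face clears the similarity threshold."""
    best_name, best_score = None, threshold
    for name, stored in watchlist.items():
        # Cosine similarity: 1.0 means identical direction, 0.0 unrelated.
        score = float(np.dot(live_embedding, stored) /
                      (np.linalg.norm(live_embedding) * np.linalg.norm(stored)))
        if score > best_score:
            best_name, best_score = name, score
    return best_name
```

Note that nothing in this matching logic depends on whether the comparison happens in real time or against recordings made earlier, which is precisely why the scope question noted above matters.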

AI applications for which specific rules are laid down

Article 41 requires that, when a chatbot or other automated mechanism interacts with users, they be warned that they are not talking or messaging with a real person but with an application, unless this is obvious in view of the circumstances.
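A toy sketch of how that duty could be honored in practice; the wording of the notice and the bot logic are, of course, assumptions made for the example.

```python
DISCLOSURE = ("You are chatting with an automated assistant, "
              "not with a human operator.")

def run_chatbot(answer) -> None:
    """Minimal text loop that discloses its automated nature up front."""
    print(DISCLOSURE)                       # the warning the draft requires
    while (user_msg := input("> ").strip()):
        print(answer(user_msg))             # delegate to whatever model is used

# Example: run_chatbot(lambda msg: f"I am only software, but I heard: {msg}")
```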

Another important rule obliges so-called deep fake systems, which generate images and/or sound capable of deceiving people into believing they are real recordings of specific individuals (for example, famous people or politicians, who can be compromised by videos in which they appear to do or say things entirely contrary to their ideas or public image), to warn that the content is fictional, although exceptions are allowed based, for example, on freedom of expression (which can legitimize the use of these techniques in fictional works such as films or series).

The obligation to warn of the use of emotion recognition systems based on data seems less clear to me, although it appears to refer to cases in which recognition occurs “live”, i.e., based on data obtained at that moment.

“High-risk” AI applications and their control mechanisms

But the core of the draft Regulation is not the prohibited applications or those subject to authorization or specific rules, but the regime for “high-risk” applications, which are listed in Annex II and regulated in Articles 5-40.

A first group of high-risk applications (for which, as we shall see, prior verification by an independent third party is required) are those used for biometric identification (video surveillance with identification of subjects) and for the operation of critical infrastructures (systems controlling a power or water supply plant, for example, whose malfunction, or malicious attack, can cause very serious damage).

The other group (which does not require such independent verification, but will be subject to a kind of responsible declaration) includes the typical applications of “predictive” AI, such as those used to determine the admission (or not) of students to educational institutions, the hiring of workers or their promotion within the company, the granting – or denial – of credit, the granting of social benefits (and the monitoring of compliance with their conditions), “predictive policing” and risk assessment used to allocate police resources, or, finally, AI applications intended for use by judges and courts. It can be seen that profiling or the use of AI to determine the contractual conditions or treatment of a subject are not completely prohibited by Article 4, but, except in extreme cases, are simply “high-risk AI applications”.

For these types of applications another form of legal treatment comes into play: not prohibition, but the establishment of requirements that they must meet. Unlike other areas traditionally considered “risky”, such as industry, where regulation culminates, at the regulatory level, in standards that establish specific safety conditions, here the requirements are so general that they are reminiscent of Article 6 of the 1812 Constitution (“Love of the Fatherland is one of the main obligations of all Spaniards, and likewise to be just and beneficent”). Thus, high-risk applications must be based on “high quality”, “representative, error-free and complete” data (Article 8). The data generated in the creation and use of the application must be documented and archived (Article 9); the application must be well made and reliable (“robustness, accuracy and security”, Article 12), always subject to human control (Article 11, which, among other things, prohibits the system from rejecting human intervention or bypassing the security mechanisms established in the application), and must have a “sufficient” degree of transparency (Article 10).

Total transparency is not required; rather, transparency must be compatible “with compliance with the legal obligations of the user and the provider” (Article 10.1), including, logically, the obligation to respect the industrial secrets used in the application itself, to which Article 62.1 refers. The regulation of transparency is obviously one of the most sensitive issues, and the draft seeks a balance in which the provider must show how the application works, including its “general logic”, as well as its starting assumptions or a description of the data used to build it, but is not required to be fully transparent about the software used.

These requirements are, in a way, maximum “objectives” to be aimed at, but they can be achieved in many ways and also with different levels of intensity. Think of transparency, robustness or the documentation or archiving of the data generated in the operation of the application: in each specific case it will be necessary to find the way to meet these objectives, and there is no single way of doing so, since, among other things, a maximum, medium or minimum level of quality or security can be aspired to. We can say, for example, that cars must be “safe”, but there is no single way of achieving this and, on the other hand, not all models and brands aspire to the same level of safety. Much remains to be specified.
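To see what “specifying” might look like, here is one possible (and deliberately simple) way of operationalising the Article 8 wording as automated checks run over a training dataset before it is used. The column name and thresholds are arbitrary assumptions; the only point is that “high quality, representative, error-free and complete” has to be translated into concrete, measurable criteria by someone.

```python
import pandas as pd

def data_quality_report(df: pd.DataFrame, protected_column: str) -> dict:
    """Report missing values and group shares for a protected attribute."""
    completeness = 1.0 - df.isna().to_numpy().mean()  # share of non-missing cells
    group_shares = df[protected_column].value_counts(normalize=True).to_dict()
    return {
        "completeness": round(completeness, 3),
        "complete_enough": completeness >= 0.99,              # assumed threshold
        "group_shares": group_shares,
        # crude representativeness check: no group below 5% of the sample
        "representative_enough": min(group_shares.values()) >= 0.05,
    }
```

Which thresholds are appropriate, and which attributes count as protected, are exactly the questions the draft leaves open.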

In high-risk applications, and leaving aside those of biometric identification, critical infrastructure management, as well as those that are installed in products that are subject to safety regulations (industrial machinery, toys, elevators, explosives, etc.), it is the “supplier” or “manufacturer” itself who controls compliance with these requirements in a responsible manner (Article 35). That is why I say that the mechanism is similar to that of a responsible declaration (Annex II, paragraph 3).

This means applying the technique of compliance or “regulatory compliance”. Each manufacturer or designer will have to establish, for each AI product and for its creation and application process, a series of measures aimed at sufficiently complying with the requirements set out in Articles 8-12 of the draft Regulation, and to document them. These measures will not be the same in all cases, but will have to be proportional to the type of application, its complexity, the damage it may cause, the risk of such damage occurring, and so on.

There are some cases in which norms translate these general requirements into concrete standards, and the draft refers to them. This happens when AI is applied in products that have safety standards which also extend to it (e.g. transport vehicles, subject to strict safety regulations) or when the EU approves technical standards on some aspect of AI.

For those cases in which the draft requires conformity to be certified by a third party (basically AI applications used in products subject to safety regulations, as well as biometric identification and the management and control of critical infrastructures), verification entities are regulated in terms similar to those of other sectors (such as auditing or the technical inspection of vehicles, to mention two very different fields): entities independent of the companies whose products they verify, subject to administrative regulation, and required to have liability insurance and sufficient technical competence. Their decisions must be open to appeal (Article 29).

Other provisions

Providers of applications that are not classified as “high-risk” (meaning that they are less likely to cause damage to protected rights or property) may adopt voluntary codes of conduct to comply with the requirements established for high-risk applications.

Member States must establish supervision and sanction mechanisms. Fines may reach EUR 20 million or 4% of worldwide turnover (whichever is higher) for infringements consisting of the use of prohibited AI applications, the provision of false information to verification entities, or failure to comply with the authorities’ requirements.

Issues such as the establishment of sandboxes or controlled testing spaces are also regulated.

What the draft Regulation does and does not regulate

This draft Regulation establishes requirements for the use of AI applications by both public and private operators. These are general, additional requirements (they do not expressly exist today) intended to prevent damage to property and rights that are protected at the highest legal level. In other words, the Regulation does not prohibit things that are now permitted; rather, it seeks to prevent such harmful or damaging results from occurring.

Compliance with the requirements established by the Regulation (which is a relative compliance, as we have seen, especially in the case of high-risk applications, because the requirements are formulated more as objectives) does not exhaust the legal problems of AI. There remains, from the outset, the subsequent control mentioned at the beginning. If damage occurs despite, for example, the preventive measures specified in the compliance document drawn up by the producer of the AI application or approved by the certifying body, there may be civil or, where appropriate, criminal liability, although it will be necessary to assess to what extent liability is excluded as a result of applying those measures, which in principle represent diligent action aimed at minimizing risks. The same happens with the ITV (the periodic roadworthiness inspection of vehicles), which is a system aimed at reducing the risks of motor vehicle traffic: having passed the ITV does not exclude the occurrence of damage, nor that the driver and/or owner of the vehicle may be liable for it.

On the other hand, compliance with the requirements set out in the Regulation does not mean that AI applications can simply be used for anything. It is also necessary to comply with data protection regulations and, in addition, to be aware of the rules applicable, where appropriate, to the specific sector in which the AI is used. Thus, for example, when Public Administrations use AI, it will be necessary to take into account what the application is meant to do: it is not the same to use it to automate processes in a purely instrumental way (as in the applications that facilitate tax returns, which are legally irrelevant) as to use it as an aid in deciding when to start an administrative procedure (one step further) or to determine the content of an administrative resolution, which will normally require regulatory authorization (I have studied it here) and not just compliance with the general requirements established in this draft Regulation.
