Autonomous artificial intelligence and criminal liability of legal persons

Following the EU High Level Expert Group on Artificial Intelligence, established in 2018, we can characterise artificial intelligence systems as software (and possibly also hardware) systems designed by humans that, given a complex goal, act in the physical or digital dimension: they perceive their environment through the acquisition and interpretation of structured or unstructured data, reason over that knowledge, process the information derived from the data and decide on the best actions to achieve the set goal. The Organisation for Economic Co-operation and Development, for its part, defines an artificial intelligence system as a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing virtual or real environments.

It is worth drawing the distinction between predetermined artificial agents with automated behaviour, on the one hand, and artificial agents capable of learning, and therefore at least relatively autonomous, on the other. Following Salvadori, we can differentiate up to four levels between the automation and the autonomy of artificial agents: those that operate automatically and can be supervised and intervened upon at all times by human action (human in the loop); those that operate by means of deterministic algorithms; the semi-autonomous, which incorporate machine learning algorithms but can still be supervised and corrected by a human subject (human on the loop); and, finally, those that incorporate multi-agent systems in order to operate with other agents and learn automatically, thus developing great autonomy (human out of the loop). This is, admittedly, an elementary classification at the computing level.
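If it helps to fix ideas, this classification can be restated as a small data structure. The following Python sketch is purely illustrative: the enum labels are shorthand for the four levels just described, and the mapping to the liability question merely anticipates the consensus sketched in the next paragraph.

```python
# Illustrative restatement of Salvadori's four levels of agent autonomy.
from enum import Enum


class AgentAutonomy(Enum):
    HUMAN_IN_THE_LOOP = 1      # automatic, supervisable and intervenable at all times
    DETERMINISTIC = 2          # operates by means of deterministic algorithms
    HUMAN_ON_THE_LOOP = 3      # semi-autonomous learner, still supervised and correctable
    HUMAN_OUT_OF_THE_LOOP = 4  # multi-agent, self-learning, highly autonomous


def complicates_liability(level: AgentAutonomy) -> bool:
    """Rough mapping of the consensus discussed below: human-controlled
    automation points to natural-person liability, whereas autonomy or
    semi-autonomy complicates the legal (and criminal) treatment."""
    return level in (AgentAutonomy.HUMAN_ON_THE_LOOP,
                     AgentAutonomy.HUMAN_OUT_OF_THE_LOOP)
```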

As has been seen in other areas (use of dangerous machinery, use of weapons, etc.), the consensus is that automatic artificial agents, under the control of the human element, generate liability in natural persons, and that autonomous or semi-autonomous artificial agents generate, if not an outright responsibility gap, then at least a complicated legal (and criminal) treatment.

Regarding the legal response that the harmful results produced by artificial agents with a certain degree of autonomy deserve, one group of proposals, the possibilist ones, connects the use of AI with human liability, through culpa in vigilando, the generation of foreseeable risks and, in short, the sanction of commission by omission allowed by art. 11 PC; others, humanising AI directly, turn to a specific liability of the artificial agent, which clashes, however (and not only), with the issue of the legal consequences that can be imposed on the “robot”.

In any case, the context in which AI with machine learning is used today, and substantially so in business strategies (directed at specific efficiency objectives), is that of a large company with the capacity to compose instructions over large databases, generating learning in the artificial agent to the point where it makes decisions that are not controllable, at least directly, by the human element. From a criminological perspective, this is the scenario that most sharpens the doubt about a possible responsibility gap in need of clarification.

How can we classify the relevance of the decisional processes of AI systems in these large companies? Firstly, one can think of artificial agents assisting human decision-making (individual or collegiate), but one can also think of projecting the autonomy of some AI systems in two other senses: a) as decision processors integrated collegially at the same level as human decisions (AI as a Corporate Board member) and b) as decision processors not directly supervised by humans, whose decisions are executed by delegation according to internal company rules.

This last assumption becomes relevant, detached from any humanising trait: an artificial agent is enabled to execute its own decisions without specific supervision by human agents and without merely automated behaviour; that is, we are dealing with a relatively autonomous AI system.

Let us imagine, for example:

1º A large company offers music streaming services and has an artificial intelligence system analyse customer data with the aim of selling additional (premium) services under a generic efficiency objective: the more contracts it obtains, the better. The algorithm combines a wide variety of data into relations that generate automatic learning, updating its action patterns in ways no longer traceable from a human perspective: detected interests, browsing trends in certain space-time bands, consumption and service-contracting habits, previous litigation with the company, etc. On this basis, the artificial agent decides to launch an offer to contract additional services in the contractual format, at the time and on the device location where it concludes that acceptance is most likely. A review according to the plural rationality parameters a human would consider points to the fact that the offer launched, with conditions that are difficult to see (within the limits of what is allowed by civil law and case law), is accepted under the default login of the account holder (for routine use of the services already contracted) by a minor who lives with him and who usually uses the shared computer at that time and from that geolocatable position. Civil considerations apart, can the company be considered to have defrauded the service holder by using the minor as an instrument?

2º A large investment company, through an artificial broker, buys and sells securities on the stock market with the aim of maximising profits within certain margins and time periods. The artificial broker enters massive sell orders, projecting that the price of the security will fall as other traders imitate its selling. Subsequently, the artificial agent buys at the lower prices, projecting that the security will exceed its selling prices within a certain period of time. Is the company committing an offence of altering financial market prices as provided for in article 284.1.3 of the Criminal Code?

The company is autonomous from the material constraints of natural persons (the risk falls on its own assets); that is, it acts the way it does precisely because, by legal definition, it is not the sum of the physical subjects that participate in it. It is, on the contrary, an entity constructed precisely so as not to be the sum of the physical subjects that make it up. And it makes decisions on a different level from that of the physical persons that compose it, even though, as a subject, it has no reflexive capacity or self-awareness, any more than an artificial agent does.

On the basis of the corporate singularity set out above, it seems logical that the company can carry out typical acts on its own. Otherwise, we would be endorsing a kind of strict liability for the acts of others.

However, it may be that the act directly attributed to a legal person is one which, phenomenologically, is carried out by an artificial agent (not, in principle, by a natural person). The construction of our Art. 31 bis PC seems to presuppose a human element, but this intuition can be circumvented and the legal person itself can be considered to commit the typical act carried out by an artificial agent, either by way of Article 31 bis a), insofar as the artificial agent can be considered one of those “authorised to take decisions on behalf of the legal person or having powers of organisation and control within it”, or by way of Article 31 bis b), insofar as the artificial agent is “subject to the authority of the natural persons referred to in the previous paragraph (and) have been able to carry out the acts because of a serious breach by them of the duties of supervision, monitoring and control of their activity”.

However, Article 31 ter 1 PC specifies that “the criminal liability of legal persons will be enforceable whenever it is established that an offence has been committed by the person holding the positions or functions referred to in the previous article, even if the specific natural person responsible has not been identified or it has not been possible to direct the proceedings against them”. A natural person, it says. And natural persons are mentioned again in Article 31 bis 2 and 4. Should the references to natural persons be replaced by “human or artificial agents”? It would help, although it could be objected that we would be over-virtualising the liability of the legal person. The other option for holding the legal person liable would force us to understand, when it comes to autonomous artificial intelligence, that here too (and not only in automated intelligence) there is a “natural person” behind it. This is not at all unreasonable, given the understanding that all artificial intelligence has a creator or, and this is what matters (above all in the field of corporate liability), a person who must control it.

Even so (or in addition to this), it would be necessary to establish the “organisational flaw” in the company that grounds such liability. It is criminal compliance (the compliance programme) that should set the programming limits within the company’s routines, so that actions in excess of them must be interpreted as phenomena incomprehensible to the company. The legal person would thus be unable to make sense, within its internal routines, of an action contrary to its social-ethical protocols; and if it cannot understand the action, no (criminal) liability of any kind can be asserted, since it can neither foresee, nor be required to foresee, what happens.

The legal person would not be liable, in any case, if its compliance complies with the rules governing the management of AI systems in the specific case, as required, for example, by the above-mentioned High Level Expert Group on Artificial Intelligence, which, among other requirements, demands for a trustworthy AI “[…] to ensure [the] prevention of harm”, as well as to “constantly assess and address […] throughout the lifecycle of AI systems” aspects such as their technical soundness and security, proper privacy and data management, transparency, etc., and an adaptation of the key requirements to the specific application of each AI. In other words, the aim would be to force the translation, in the creation and use of AI, of criminal prohibitions into a computer language of “logical validity” and “terminological precision” that guarantees (to a probabilistically high degree) the absence of criminal risk.

Thus, in the examples given above:

1º A compliance system should be required which, even across autonomous evolutions of the agent, assesses the risk of the contract being accepted by a minor, makes a double verification system compulsory and prevents any variation of this requirement (see the first sketch after these two points).

2º In the second example, the compliance system should assess the risk of alteration of the prices of financial products and propose a system for avoiding massive sales, based on a statistical assessment of the behaviour of the other operators in the market (the probability that the autonomous sell order of the company’s own AI system is the main generator of a downward trend causing one of the consequences prohibited by art. 284.1.3 CP). Or, where the risk inherent in the business model cannot be avoided, at least to guarantee software whose autonomous action prevents the generation of “a profit in excess of two hundred and fifty thousand euros” or “a serious impact on the integrity of the market” (see the second sketch below).
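By way of illustration only, here is a minimal Python sketch of the first guardrail: a hard-coded double verification gate that sits outside the learned offer policy, so that autonomous retraining cannot weaken it. All names, registries and data are hypothetical assumptions, not a real system.

```python
# Hypothetical sketch: a fixed compliance gate for the streaming example.
# The learned policy proposes premium offers; this layer, which is code
# rather than a trainable parameter, demands two independent age checks
# before any contract may be concluded.
from dataclasses import dataclass

# Toy stand-ins for the company's customer and session registries.
ACCOUNT_HOLDER_IS_ADULT = {"acct-001": True}
SESSION_REVERIFIED_AS_ADULT = {"dev-42": False}  # a minor is at the keyboard


@dataclass
class OfferProposal:
    account_id: str
    device_id: str
    price_eur: float


def compliance_gate(proposal: OfferProposal) -> bool:
    """Double verification: the registered holder must be an adult AND the
    person currently using the device must re-verify their age."""
    holder_ok = ACCOUNT_HOLDER_IS_ADULT.get(proposal.account_id, False)
    session_ok = SESSION_REVERIFIED_AS_ADULT.get(proposal.device_id, False)
    return holder_ok and session_ok


offer = OfferProposal("acct-001", "dev-42", 9.99)
print("launch offer" if compliance_gate(offer)
      else "offer blocked: double verification failed")  # blocked here
```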
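And a similarly hypothetical sketch of the second guardrail: a fixed compliance layer that caps order size relative to recent market volume and halts the strategy before the two-hundred-and-fifty-thousand-euro figure of art. 284 CP can be reached. The thresholds are assumptions for illustration.

```python
# Hypothetical sketch: pre-trade compliance checks for the artificial broker.
MAX_VOLUME_SHARE = 0.05          # assumed cap: 5% of recent traded volume
PROFIT_CEILING_EUR = 250_000.0   # statutory figure used as a hard stop


class ComplianceHalt(Exception):
    """Raised instead of letting a risky order reach the market."""


def check_sell_order(qty: int, recent_market_volume: int,
                     cumulative_profit_eur: float) -> None:
    # (i) Cap order size so the agent's own selling is unlikely to be
    # the main generator of a downward price trend.
    if recent_market_volume <= 0 or qty > MAX_VOLUME_SHARE * recent_market_volume:
        raise ComplianceHalt("order rejected: size could drive the price down")
    # (ii) Halt the strategy before the statutory profit figure is reached.
    if cumulative_profit_eur >= PROFIT_CEILING_EUR:
        raise ComplianceHalt("strategy halted: EUR 250,000 profit ceiling")


# Usage: a 40,000-share sell against 500,000 shares of recent volume (8%).
try:
    check_sell_order(qty=40_000, recent_market_volume=500_000,
                     cumulative_profit_eur=120_000.0)
except ComplianceHalt as err:
    print(err)  # order rejected: size could drive the price down
```

The point of both sketches is the same: the criminal prohibition is expressed in a computer language of “logical validity” and “terminological precision”, placed outside the parameters the autonomous agent can learn to modify.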

In short, the spread of artificial intelligence through social routines is evident, and it is clear that the causing of harmful results through the mediation of artificial agents has become a legal problem of the first order. From a criminal perspective, we can identify a certain liability gap in relation to autonomous AI systems.

However, bearing in mind that the most serious cases of the use of autonomous artificial agents will probably arise in the field of corporate business decisions, here too, as with corporate criminal liability arising from the actions of a natural person, the lack of a (good) regulatory compliance programme must become the defining element of such liability, once certain literal obstacles, which are not definitive, have been overcome.
