  • The European Commission keeps its White Paper on Artificial Intelligence, which promotes the development of secure and reliable AI, open for public consultation.
  • We The Humans, an independent think tank, supports these measures, although it asks that the ethical considerations applied to “high risk” AI systems be extended to all AI technology.
  • The independent think tank also calls on organisations to adopt an AI governance strategy, in the interest of society, that makes the excellence and ethics of these systems possible, as well as regulatory compliance.

Madrid, April 7, 2020.- The members of the independent think tank We The Humans support, and seek to clarify certain aspects of, the European Digital Strategy, based on the value of data, and the White Paper on Artificial Intelligence, published by the European Commission on 19 February. Both documents were presented by the President of the Commission, Ursula von der Leyen, with the aim of defining a data strategy and a human-centered approach to AI. According to the President, “Europe should advocate for technology that benefits people, a fair and competitive economy and an open, democratic and sustainable society.”

The importance of the AI White Paper, open to public consultation until May 19, lies in the fact that it is the first time legislation for this technology has become a real possibility; it could be made public in the last quarter of 2020. For the President of the Commission it is a complex project because “it covers everything, from cybersecurity to critical infrastructures, from digital education to skills, from democracy to the media. I want a digital Europe that reflects the best of Europe: open, fair, diverse, democratic and self-confident.”


We The Humans defends this advance, while raising some considerations about certain aspects of the document.

First, the White Paper discusses legislating AI-based systems, although it does not directly provide a legal definition. “It identifies the importance of algorithms and systems, although it should also include the hardware that makes the feeding and exploitation of data, and the operation of the applications themselves, possible,” explains Juan Ignacio Rouyet, president of We The Humans and Director of Quint Services.

It should also be considered that all these systems learn in proportion to the volume of data they handle, or else end up learning autonomously, in some cases without the need for supervision. “It is necessary to provide a legal definition that contemplates and includes the widest possible spectrum of the technology and everything its application involves,” says Juan Ignacio Rouyet, president of the think tank.


Risk-based scope

Another relevant aspect is the model put forward by the White Paper, whose scope is based on risk. High-risk applications would have to meet a set of mandatory requirements; for the rest, the regulatory framework would apply to a greater or lesser extent depending on the identified impact.

According to the draft document, the risk and the degree of danger should be assessed on the basis of three variables:

  • The sector
  • The intended use
  • Whether it involves significant risk

As a first step, the White Paper proposes creating a directory of sectors that could be classified as high risk. These sectors would be identified based on how critical their activity is for society, such as health, transport or energy.

Regarding the intended use, the practical applications of these systems in the different sectors and activities should be defined, so that the quantification or determination of risk will also depend on its possible impact on:

  • The rights of individuals or companies
  • Material or immaterial damage (injury, death)
  • Effects that cannot be avoided by natural or legal persons

Additionally, certain exceptional situations are always considered high risk, namely those relating to:

  • Situations affecting workers’ rights or specific applications affecting consumer rights
  • The use of biometric identification and intrusive surveillance

Therefore, according to the White Paper, only those cases in which there may be a high risk to the rights and freedoms of individuals, or to critical sectors of activity, would be expressly legislated and subject to mandatory conditions and requirements. As for the rest of the systems, those not categorized as high risk: should they comply with specific regulation?
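Purely as an illustration, and not as part of the White Paper itself, the classification criteria summarised above can be read as a simple decision rule: a system would be high risk if it falls under one of the exceptional situations, or if it operates in a critical sector and its intended use carries significant risk. The minimal Python sketch below captures that reading; the sector directory, the list of exceptional uses and the example values are hypothetical, chosen only to mirror the examples in the text.

# Illustrative sketch only: the sector directory, exceptional uses and function
# below are hypothetical examples, not definitions taken from the White Paper.

CRITICAL_SECTORS = {"health", "transport", "energy"}  # example directory of critical sectors
EXCEPTIONAL_USES = {
    "biometric identification",
    "intrusive surveillance",
    "workers' rights",
}  # situations always treated as high risk

def is_high_risk(sector: str, intended_use: str, significant_risk: bool) -> bool:
    """High risk if the use is an exceptional situation, or if the system
    operates in a critical sector and its intended use involves significant risk."""
    if intended_use in EXCEPTIONAL_USES:
        return True
    return sector in CRITICAL_SECTORS and significant_risk

# A biometric identification tool is high risk regardless of sector,
# while a retail chatbot without significant risk is not.
print(is_high_risk("retail", "biometric identification", False))  # True
print(is_high_risk("retail", "customer service chatbot", False))  # False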

Systems or applications categorized as high risk will be subject to mandatory requirements and conditions related to: 1) training data and learning, 2) record keeping and security, 3) provision of information, 4) robustness and accuracy, and 5) human supervision.

According to María González, secretary of We The Humans and IT and Privacy Partner at ECIJA: “Although at first an AI-based system may not meet the conditions to be considered high risk, regulation should apply to all systems, because all of them can affect the rights and freedoms of individuals. Therefore, a general regulation covering any AI-based development is needed, with greater or lesser demands depending on the sectors, intended uses or identified risks.” María considers that “the requirements established for high-risk systems should be a baseline measure, to be met in all areas of artificial intelligence.” “Whether it is a chatbot in a retail company or a risk assessor in a bank, both should comply with specific minimum conditions, which can be qualified or extended based on the specific impacts and risks,” María adds.


Roles and responsibilities

Beyond the specific aspects of regulatory compliance, the members of the We The Humans association argue that companies which develop and exploit AI-based systems should establish a governance framework to guarantee the requirements the Commission asks for, regarding issues such as training data or human supervision. It is useful to define specific roles responsible for this compliance.

“The governance of AI requires establishing functional structures within the entity, processes or workflows, and relationship and control mechanisms that guarantee, in any case and in the long term, periodic controls and evaluations, continuous improvement and, therefore, compliance with applicable legal obligations,” concludes Juan Ignacio Rouyet, member of We The Humans.