
M. Àngels Barbarà
Artificial intelligence: Automated decisions in Catalonia
The Sala Cotxeres of the Palau Robert in Barcelona (Passeig de Gràcia, 107) hosted the presentation of the report “Artificial intelligence. Automated decisions in Catalonia”, an ambitious research and dissemination project promoted by the Catalan Data Protection Authority with the aim of analysing the state of artificial intelligence (AI) in Catalonia from the perspective of personal data protection. It is a pioneering study of the situation of artificial intelligence in Catalonia, examined from an ethical and data protection standpoint, with a particular focus on automated decision algorithms.
The report has a clearly educational purpose: it seeks to inform and raise public awareness so that citizens can make responsible use of their personal data in a society where computers autonomously take thousands of decisions that affect us every day. It does so by starting with a didactic explanation of automated decision algorithms, their practical implications and the data protection risks that the digital context poses.
The presentation of the report brings together experts in ICT and data protection
The presentation of the report “Artificial intelligence. Automated decisions in Catalonia” was attended by the director of the Authority, Maria Àngels Barbarà; the journalist Karma Peiró; and the coordinator of Technology and Information Security at APDCAT, Jordi Soria. The event ended with an open debate featuring experts in the field of ICT and data protection, as well as the experts who collaborated in the research work.
In her speech, Barbarà, director of the Catalan Data Protection Authority, highlighted the potential benefits of AI for society, while recalling that, in a context where technological evolution rests on the massive and intensive use of data, the right tools must be found to protect people's rights and freedoms. In this regard, she described the widespread debate under way at a global level (European Union, Canada, USA, etc.) on whether current regulatory models are sufficient to protect rights and freedoms across the areas where AI and automated decision algorithms are used.
Within the framework of the right to data protection, the director referred to the European Data Protection Board, which points out that the GDPR covers the creation and use of most algorithms, and that the existing legal framework makes it possible to address many of the potential risks and challenges associated with processing personal data using algorithms. It is therefore necessary to focus on developing existing standards, especially the requirements of transparency, accountability and data protection impact assessments, in the context of automated decision algorithms.
The director of the APDCAT also noted that the capacity of algorithms to extract patterns and profiles from personal data can jeopardize not only the rights of a specific person, but also those of certain groups or collectives. In such cases we must also contend with profiling that creates discrimination against people and groups on the basis of parameters identified by algorithms, parameters that are invisible to us and therefore difficult to detect.
Barbarà pointed out that the new reality of the technological society requires new approaches: we must be willing to confront the challenges that AI poses to our data protection model, acting proactively to better protect people at a time when their way of life is changing without their being fully aware of it. She also stressed that, more than ever, we must be positive and dare to think beyond what we know in order to build the model of society we want; when technology becomes integrated into people's lives and shapes society, we must be willing and able to offer a constructive, human-centred assessment and critique.
Barbarà ended her speech by stating that when technology advances by using our data, it must also advance in guaranteeing our rights and freedoms.
About the report
This commitment to awareness and pedagogy is reflected in a highly detailed report that combines analysis with a practical dimension based on real examples. The document consists of two clearly differentiated blocks.
The first block is a research study on the use of automated decision algorithms in Catalonia, conducted by Karma Peiró, a journalist specializing in information and communication technologies. Its aim is to inform the public, in an accessible tone, of the advantages and risks of automated decisions. To help the reader better understand these implications, Peiró complements her analysis with some fifty practical, real-world examples of the application of automated decision algorithms, covering areas as diverse and everyday for citizens as health, education, work, banking and commerce.
Through these examples, Peiró highlights the great benefits that these technologies are bringing to our society in areas as important as disease diagnosis, the granting of social assistance, hospital management and the improvement of school learning, but she also points out the risks of discrimination and misuse of personal data that may be associated with these algorithms.
This block ends with the ethical reflections of some thirty Catalan experts—top figures in academic research into artificial intelligence—and entrepreneurs and professionals related to artificial intelligence.
The second block focuses on automated decisions from a data protection perspective. Data is one of the foundations on which AI is built: an abundance of data has made it possible to create AI systems capable of performing a multitude of tasks without being explicitly programmed for them. This is so-called machine learning, which analyses the available data and learns from accumulated experience. Data is equally essential when these systems are applied, since each new application is determined by the data that defines the specific case.
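The contrast between explicit programming and learning from data can be sketched with a minimal example. The following toy classifier (written for illustration only; it is not taken from the report and does not reflect any system analysed in it) derives its decision rule from labelled examples rather than from hand-written conditions:

```python
# A minimal illustration of "learning from data": instead of hand-coding a
# rule, the classifier derives one from labelled examples (the mean value
# of each class) and then applies it to new, unseen cases.

def train(examples):
    """examples: list of (value, label) pairs. Returns per-label means."""
    sums, counts = {}, {}
    for value, label in examples:
        sums[label] = sums.get(label, 0.0) + value
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(model, value):
    """Assign the label whose learned mean is closest to the new value."""
    return min(model, key=lambda label: abs(model[label] - value))

# Labelled "experience": small values are "low", large values are "high".
data = [(1.0, "low"), (2.0, "low"), (8.0, "high"), (9.0, "high")]
model = train(data)
print(predict(model, 1.5))  # prints "low": the decision comes from the data
```

The decision boundary is never written down by the programmer; it emerges from the training data, which is exactly why the quality and provenance of that data matter so much from a data protection standpoint.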
When that data is personal, the GDPR must be taken into account, as it governs the limits on processing, the obligations of data controllers and the rights of individuals. Here there is clear friction between algorithms' need for data and the GDPR, and one of the main topics of the second block of the report presented today is precisely the analysis of this friction.
The second block ends with a series of recommendations that seek, as far as possible, to make AI and automated decisions compatible with data protection, both from the point of view of people, who seek to protect their rights, and from that of the organizations that apply these algorithms and need access to data.