The Catalan Data Protection Authority (APDCAT) organised the session ‘Artificial Intelligence. An ethical overview’ as part of the Mobile World Congress, to discuss the challenges of deploying AI technologies without discriminating against or manipulating people.
The director of the APDCAT, Meritxell Borràs i Solé, opened the session, which featured a talk by Ricardo Baeza-Yates, Director of Research at the Institute of Experiential AI at Northeastern University (Silicon Valley) and a member of the DATA Lab at Khoury College of Computer Sciences, along with other experts in the field.
Borràs spoke about both the opportunities and the risks of artificial intelligence systems and applications, which have become widespread in society over the last decade. “We cannot fully appreciate its impact or the risks it conceals, because AI is transforming the world in an unprecedented way,” she said.
The goal, among other things, is to avoid the biases that can arise in the design of algorithms, which can influence or discriminate against us, as well as the indiscriminate use of mass video surveillance with facial recognition and the control of social behaviour.
Borràs added that, from a technical point of view, the available technology needs to improve in transparency and reliability, and she pointed out the need for legal mechanisms that guarantee the protection of citizens' rights.
The most urgent task, however, is to “maintain a critical view of AI”, one that questions how systems operate in order to find those elements with a negative impact on people that are not obvious from a technical point of view. “Therefore, ethics must be incorporated into AI, and in the process we must be demanding, beyond proposing ethical codes,” said Borràs.
For his part, Ricardo Baeza-Yates focused his talk on the ethical challenges of artificial intelligence. Specifically, he spoke about discrimination, phrenology, unfair digital commerce, models that do not understand semantics, and the indiscriminate use of computing resources.
Baeza-Yates emphasized three types of bias: biases in the data; biases in the algorithm itself, which sometimes amplify those in the data; and biases in the interaction between the system and its users, which combine algorithmic and cognitive biases.
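To make the amplification point concrete, here is a minimal illustrative sketch (not from the talk; the scenario and numbers are invented): a dataset with a 70/30 skew is fed to a naive majority-class model, which then outputs the majority label every time, turning a 70% bias in the data into a 100% bias in the algorithm's decisions.

```python
from collections import Counter

# Hypothetical training labels with a built-in data bias: 70% "approve", 30% "deny".
training_labels = ["approve"] * 70 + ["deny"] * 30

# A naive model that always predicts the most common training label.
majority_label = Counter(training_labels).most_common(1)[0][0]

def predict(applicant):
    # Ignores the applicant entirely; returns the majority label.
    return majority_label

# The model's output for 100 hypothetical applicants.
predictions = [predict(a) for a in range(100)]

print(Counter(training_labels))  # Counter({'approve': 70, 'deny': 30})
print(Counter(predictions))      # Counter({'approve': 100})
```

The data bias (a 70/30 split) is amplified by the algorithm into a 100/0 split: every "deny"-type case disappears from the output, which is one simple way an algorithm can magnify a bias already present in its data.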
He also spoke about the challenges to be addressed in this area, such as the principles software should meet, cultural differences, regulation, and individual cognitive biases.
Baeza-Yates said that, in addition to the ethical questions a company must ask before launching a product or service, it needs to ask other related questions: whether it has the proper political and technical competence, whether it has done a thorough technical analysis, and whether it has weighed the possible individual and social impacts.
On what society can do to meet these challenges, Baeza-Yates insisted on responsible AI, which “implies a process that involves, from the beginning, all the actors in the design, implementation and use of software”.
These actors form, he said, “a multidisciplinary team, from experts in the problem to be solved to computer scientists”.