A European Positive Sum Approach towards AI tools in support of Law Enforcement and safeguarding privacy and fundamental rights
The project is a 24-month Coordination and Support Action bringing together security practitioners, AI scientists, ethics and privacy researchers, civil society organisations, and social sciences and humanities experts, with the purpose of consolidating knowledge, exchanging experience and raising awareness across the EU under a well-planned work methodology. The core vision of pop-AI is to foster trust in AI for the security domain through increased awareness, ongoing social engagement and the consolidation of distinct spheres of knowledge, offering a unified European view across LEAs and specialised knowledge outputs (recommendations, roadmaps, etc.), while creating an ecosystem that will form the structural basis for a sustainable and inclusive European AI Hub for Law Enforcement.
The project aims to draw on existing knowledge, complemented by an extensive set of studies, to identify and record the direct and indirect stakeholders of the “security and AI” setting, together with their respective points of view (concerns, perceived opportunities, challenges). This mapping will further examine the dynamic interactions between these stakeholders and ensure appropriate gender and diversity representation in the participatory processes. In this way pop-AI will tap into the rich knowledge of security practitioners, civil society organisations and citizens, as well as social sciences and humanities experts, to define the interactions and materials that will enable co-creation within the ecosystem.
Coordinator: National Center for Scientific Research “Demokritos” CSRD (GR). Partners: Trilateral Research Limited (IRL), Eticas Research and Innovation (ES), KEMEA (GR), CERTH (GR), Technische Universiteit Eindhoven (NL), Zanasi Alessandro SRL (IT), European Citizen Action Service (BE), Hellenic Police (GR), HFOD (DE), Gobierno Vasco Departamento Seguridad (ES), APZ (SL).
The project in Torino
PLTO will be involved in research and analysis of the AI functionalities currently used in security, with the aim of highlighting the technical characteristics of the different solutions and functionalities. Specifically, the use of AI within police forces will be explored with a focus on organisational issues, such as the impact of technological processes on the health and safety, autonomy and responsibility of LEAs.
Research and analysis will be carried out through Stakeholder Policy Labs, in order to facilitate exchange between relevant LEAs and related experts, develop ideas for smart policies, and test solutions to identified controversies in experimental models.
- Analysis of the theoretical legal, ethical and social framework related to the use of AI tools in the security domain
- Empirical research on AI tools in the security domain, awareness raising, societal acceptance and ethics
- Final recommendations, best practices and a white book; coordination and networking activities with stakeholders across Europe