
Shaping the Future of AI in Policing: ALIGNER's Pragmatic Approach

Author

Laura Galante

Jul 11, 2023

ALIGNER aspires to rally European stakeholders anxious about AI's role in law enforcement. The project's goal is to create a unified front to identify strategies that will not only bolster the strength of law enforcement agencies through AI but also ensure public benefit. But how far into the future is it useful to look?

In a world where technological advancement is swift and relentless, the EU-funded security research project ALIGNER focuses on the integration and implications of Artificial Intelligence (AI) in law enforcement, deliberately looking at a more immediate, shorter-term time frame.


Project Coordinator Daniel Lückerath is pragmatic: “The rapid developments in AI technologies and their increasing public availability, as well as permeation throughout many aspects of society – from your fridge to your smartphone – make reliable foresight very far ahead almost impossible”. Hence, ALIGNER bases its strategies on the imminent needs, challenges, and opportunities that law enforcement confronts, considering both the potential misuse of AI and its constructive use by police and law enforcement in societal contexts.


ALIGNER focuses on a not-too-distant "future scenario" in which AI is an integral part of daily life and plays a pivotal role in policing and law enforcement. This approach, enriched by input from advisory boards and research collaborations, has identified significant areas where criminal use of AI is likely to be most prominent: disinformation and social manipulation, cybercrime against individuals and organisations, and the application of AI in vehicles, robots, and drones.


ALIGNER has also identified sectors where AI could revolutionise policymaking and law enforcement practices. Promising applications include data handling processes such as incident and crime reporting, digital forensics for obtaining digital evidence, improved incident reaction and response mechanisms, crime detection, and the use of AI in vehicles, robots, and drones.

Based on these identified sectors, ALIGNER works along four distinct "narratives", or topical scenarios, that intertwine different aspects of these highlighted categories and guide the related work in the project. “For each ‘narrative’ that ALIGNER works on, we identify suitable AI technologies,” Lückerath explains. “These are briefly described in so-called scenario cards that summarise the relevant information – what the technology is about, how effective it is, and how robust.” The narratives discussed thus far revolve around disinformation and social manipulation, cybercrime against individuals using chatbots, and AI-enabled malware; the fourth is currently under discussion within the project team. Based on these topical scenarios, the project has developed methods for assessing the technical, organisational, ethical, and legal implications.
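
The article does not reproduce a scenario card, but the description suggests a simple structured record. The following Python sketch is purely illustrative: the class and field names are assumptions, not ALIGNER's actual card format.

```python
from dataclasses import dataclass, field

@dataclass
class ScenarioCard:
    """Illustrative sketch of a scenario card as described above.

    Field names are assumptions; ALIGNER's actual card layout is
    not published in this article.
    """
    technology: str        # what the technology is about
    narrative: str         # which topical scenario it belongs to
    effectiveness: str     # how effective the technology is
    robustness: str        # how robust it is
    notes: list[str] = field(default_factory=list)

# Example card for the disinformation narrative discussed below.
card = ScenarioCard(
    technology="Authorship attribution for deanonymisation",
    narrative="Disinformation and social manipulation",
    effectiveness="Assessed per scenario by project experts",
    robustness="Assessed per scenario by project experts",
)
print(card)
```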


As an example, the first ‘narrative’, dealing with disinformation and social manipulation, assumes that criminals use AI for phishing attacks to gather personal data. Through phishing attempts, they identify and attack high-value targets (‘tailored phishing’ or ‘spear phishing’). The goal of these attacks is to manipulate or coerce targets in order to gain unauthorised access to computer networks, e.g., of election campaigns, large research companies, or industry organisations. Phishing attacks may involve online attempts to persuade or trick individuals into divulging passwords or access codes or, if the opportunity arises, using harvested data to subject them to blackmail or coercive threats. Besides targeted phishing attacks and data harvesting, criminals may disseminate selective misinformation and disinformation apparently emanating from official or well-informed sources. Such disinformation uses artificially generated videos, images, text, and sound, including deep fakes of public figures, and is spread by AI-fuelled ‘bots’. To counter these threats, law enforcement agencies also bring AI to bear: they use veracity assessment methods to detect disinformation, then employ deanonymisation techniques such as authorship attribution and the geolocation of images to identify where the disinformation originated. This is supported by techniques for the detection of synthetic images and videos.
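
Authorship attribution, one of the deanonymisation techniques mentioned above, is commonly approached as stylometric text classification. The sketch below shows a minimal version of that general technique using scikit-learn; the toy texts, labels, and model choice are illustrative assumptions, not ALIGNER's actual tooling.

```python
# Minimal authorship-attribution sketch: character n-gram stylometry.
# Character n-grams capture punctuation and spelling habits that tend
# to persist across an author's posts. Toy data for illustration only;
# a real system would train on large corpora of attributed writing.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

train_texts = [
    "We have uncovered shocking facts the media won't report...",
    "Sources close to the campaign confirm the documents are real...",
    "BREAKING!!! share before they delete this!!!",
    "you wont believe what they found, spread the word!!!",
]
train_authors = ["actor_A", "actor_A", "actor_B", "actor_B"]

model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LinearSVC(),
)
model.fit(train_texts, train_authors)

# Attribute a new, unattributed post to the closest known style.
print(model.predict(["SHARE this now!!! they are hiding the truth!!!"]))
```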


In the second ‘narrative’, a crypto romance scam, a criminal contacts a victim via an online chat, grooming the victim into believing the scammer is a genuine ‘friend’ and subsequently extracting cryptocurrency from the victim. These scams might be supported by generative AI models like ChatGPT, DALL-E, or Midjourney, which can create fake profile pictures, voices, and videos, or automate text generation in multiple languages. In the future, the creation of profiles, the targeting of individuals, the generation of fake cryptocurrency company sites, and the grooming itself might even become highly automated. To address these threats, law enforcement agencies themselves need to deploy AI-based models to detect generated content, to support the automatic detection of scammer profiles as well as scam victims, to detect voice clones, and to detect cryptocurrency laundering.
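
Before the AI-based detectors the article envisions are in place, a first screening pass over chat messages can be approximated with simple pattern matching. The following Python sketch is a toy illustration only; the patterns, scoring, and example message are invented for this article and bear no relation to any ALIGNER deliverable.

```python
# Toy first-pass filter for crypto romance scam chat messages.
# Real detection would use trained models, as discussed above; this
# only shows the shape of a rule-based screening step.
import re

SCAM_PATTERNS = [
    r"\bguaranteed (returns|profit)\b",
    r"\b(send|transfer) (me )?(btc|bitcoin|eth|crypto)\b",
    r"\btrust me\b.*\binvest\b",
    r"\bwallet address\b",
]

def scam_risk_score(message: str) -> float:
    """Return the fraction of scam patterns matched (0.0 to 1.0)."""
    text = message.lower()
    hits = sum(bool(re.search(p, text)) for p in SCAM_PATTERNS)
    return hits / len(SCAM_PATTERNS)

msg = ("Trust me, if you invest now you get guaranteed returns. "
       "Send me BTC to my wallet address.")
print(f"risk={scam_risk_score(msg):.2f}")  # risk=1.00 for this example
```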


ALIGNER collaborates with professionals from policing, academia, research, industry, and policymaking, including legal and ethics experts, organised into two advisory boards: one for law enforcement expertise, the other gathering research, industry, and ethics authorities. “To receive a reliable assessment, we need many different experts from different European countries to ensure that we reflect a broad view on these emerging technologies and scenarios. This takes time, especially considering different languages and expertise,” Lückerath says.


While AI can be misused by criminals, it can also greatly aid law enforcement in combating crime, for example by reducing errors, automating time-consuming tasks, identifying potentially suspicious behaviours, and even speeding up legal procedures by predicting possible outcomes based on past cases. However, care must be taken to prevent AI from introducing bias and discrimination, as certain geographic areas or groups might be unfairly targeted, leading to a disproportionate increase in arrests.


This is why ALIGNER has developed the ALIGNER Fundamental Rights Impact Assessment (AFRIA), a tool that enables law enforcement authorities to further enhance their existing legal and ethical governance systems. It is a method designed to help law enforcement follow ethical guidelines and respect fundamental rights when using AI systems in their work. It consists of a fundamental rights impact assessment template and an AI System Governance template that help authorities identify, explain, and record possible measures to mitigate any potential negative impact an AI system may have on ethical principles. While there is no legal obligation in the EU to perform such assessments, the AFRIA complements existing or forthcoming legal and ethical governance systems, such as the AI Act proposed by the European Commission in 2021. Depending on the results of the trilogue negotiations, Lückerath explains, “ALIGNER would like to see a practicable and sensible AI regulation that…enables law enforcement agencies to use AI in an ethical, legal, and socially acceptable way, and still allows us to make use of AI technologies for the betterment of society.”
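
The AFRIA templates themselves are not reproduced in this article. As a rough illustration of the kind of record such an assessment template might capture, consider this hypothetical Python sketch; all field names and the example entry are assumptions.

```python
from dataclasses import dataclass

@dataclass
class RightsImpactEntry:
    """Hypothetical record for a fundamental rights impact assessment.

    Field names are assumptions for illustration; they do not
    reproduce ALIGNER's actual AFRIA templates.
    """
    ai_system: str
    affected_right: str        # e.g. protection of personal data
    impact_description: str
    severity: str              # e.g. "low", "medium", "high"
    mitigation_measure: str    # the recorded mitigation

entry = RightsImpactEntry(
    ai_system="Image geolocation for deanonymisation",
    affected_right="Protection of personal data",
    impact_description="Could expose the location of uninvolved individuals",
    severity="high",
    mitigation_measure="Restrict use to authorised investigations with oversight",
)
print(entry)
```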


Lückerath envisions a future in which established national centres across Europe support law enforcement agencies with ethical, legal, and socially acceptable implementation and deployment of AI technologies, as well as oversight bodies that would use a harmonised framework to assess AI technologies before, during, and after their deployment. In this envisaged future, a harmonious blend of technology and ethics may very well redefine the contours of law enforcement, empowering agencies with the tools of AI while maintaining steadfast commitment to ethical and legal standards.

