ALIGNER aspires to rally European stakeholders anxious about AI's role in law enforcement. The project's goal is to create a unified front to identify strategies that will not only bolster the strength of law enforcement agencies through AI but also ensure public benefit. But how far into the future is it useful to look?
In a world where technological advancement is swift and relentless, the EU-funded security research project ALIGNER focuses on the integration and implications of Artificial Intelligence (AI) in law enforcement, deliberately working within a more immediate, shorter-term time frame.
Project Coordinator Daniel Lückerath is pragmatic: “The rapid developments in AI technologies and their increasing public availability, as well as permeation throughout many aspects of society – from your fridge to your smartphone – make reliable foresight very far ahead almost impossible”. Hence, ALIGNER bases its strategies on the imminent needs, challenges, and opportunities that law enforcement confronts, considering both the potential misuse of AI and its constructive use by police and law enforcement in societal contexts.
ALIGNER focuses on a not-too-distant "future scenario" in which AI is an integral part of daily life and plays a pivotal role in policing and law enforcement. This approach, enriched by input from advisory boards and research collaborations, has identified the areas where criminal use of AI is likely to be most prominent: disinformation and social manipulation, cybercrimes against individuals and organisations, and the misuse of AI in vehicles, robots, and drones.
ALIGNER has also identified sectors where AI could revolutionise policymaking and law enforcement practices. Promising applications include data-handling processes such as incident and crime reporting, digital forensics for securing digital evidence, improved incident reaction and response mechanisms, crime detection, and the use of AI in vehicles, robots, and drones.
Based on these identified sectors, ALIGNER works along four distinct "narratives", or topical scenarios, which intertwine aspects of the highlighted categories and guide the related work in the project. “For each ‘narrative’ that ALIGNER works on, we identify suitable AI technologies,” Lückerath explains. “These are briefly described in so-called scenario cards that summarise the relevant information – what the technology is about, how effective it is, and how robust.” The narratives discussed thus far revolve around disinformation and social manipulation, cybercrime against individuals using chatbots, and AI-enabled malware; the fourth is currently under discussion within the project team. Based on these topical scenarios, the project has developed methods for assessing the technical, organisational, ethical, and legal implications.
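The article does not reproduce an actual scenario card, but the attributes Lückerath mentions (what the technology is about, how effective it is, how robust) suggest a simple record structure. The Python sketch below is purely illustrative; every field name is an assumption, not ALIGNER's actual template:

```python
from dataclasses import dataclass, field

@dataclass
class ScenarioCard:
    """Illustrative stand-in for an ALIGNER-style scenario card.

    Field names are assumptions based on the description in the text,
    not the project's published format.
    """
    technology: str                 # what the technology is about
    narrative: str                  # which topical scenario it belongs to
    description: str                # short summary of the technology
    effectiveness: str              # e.g. "low" / "medium" / "high"
    robustness: str                 # resilience to evasion or noisy input
    references: list[str] = field(default_factory=list)

# Hypothetical card for the disinformation narrative
card = ScenarioCard(
    technology="Authorship attribution",
    narrative="Disinformation and social manipulation",
    description="Links anonymous texts to likely authors via writing style.",
    effectiveness="medium",
    robustness="sensitive to short texts and deliberate style masking",
)
print(card.technology, "-", card.effectiveness)
```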
As an example, the first ‘narrative’, dealing with disinformation and social manipulation, assumes that criminals use AI for phishing attacks to gather personal data. Through these phishing attempts, they identify and attack high-value targets (‘tailored phishing’ or ‘spear phishing’). The goal of these attacks is to manipulate or coerce targets in order to gain unauthorised access to computer networks, e.g. those of election campaigns, large research companies, or industry organisations. Phishing attacks may involve online attempts to persuade or trick individuals into divulging passwords or access codes or, if the opportunity arises, using harvested data to subject them to blackmail or coercive threats. Besides targeted phishing attacks and data harvesting, criminals may disseminate selective misinformation and disinformation that appears to emanate from official or well-informed sources. This disinformation uses artificially generated videos, images, text, and sound, including deep fakes of public figures, produced by AI-fuelled ‘bots’. To counter these threats, law enforcement agencies bring AI to bear as well: they use veracity assessment methods to detect disinformation, then employ deanonymisation techniques such as authorship attribution and the geolocation of images to identify where the disinformation originated. This is supported by techniques for detecting synthetic images and videos.
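The article names authorship attribution as one deanonymisation technique but gives no implementation details. One classical approach, shown here only as a minimal sketch and not as ALIGNER's method, compares character n-gram frequency profiles between an anonymous text and known writing samples; production systems use far richer features and trained models:

```python
from collections import Counter
from math import sqrt

def char_ngrams(text: str, n: int = 3) -> Counter:
    """Character n-gram profile: a common stylometric feature."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two n-gram profiles."""
    dot = sum(a[g] * b[g] for g in set(a) & set(b))
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def attribute(unknown: str, candidates: dict[str, str]) -> str:
    """Return the candidate whose known writing is stylistically closest."""
    profile = char_ngrams(unknown)
    return max(candidates, key=lambda name: cosine(profile, char_ngrams(candidates[name])))

# Hypothetical known samples from two sources of disinformation
known = {
    "troll_farm_A": "u gotta see this!!! total fraud, wake up ppl",
    "troll_farm_B": "It is imperative that citizens recognise the deception.",
}
print(attribute("wake up!!! this election is total fraud ppl", known))  # troll_farm_A
```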
In the second ‘narrative’, a crypto romance scam, a criminal contacts a victim via an online chat, grooming the victim into believing the scammer is a genuine ‘friend’ and subsequently extracting cryptocurrency from the victim. These scams might be supported by generative AI models such as ChatGPT, DALL-E, or Midjourney, which can create fake profile pictures, voices, and videos, or automate text generation in multiple languages. In the future, the creation of profiles, the targeting of individuals, the generation of fake cryptocurrency company sites, and the grooming itself might become highly automated. To address these threats, law enforcement agencies themselves need to deploy AI-based models to detect generative content, support the automatic detection of scammer profiles as well as scamming victims, detect voice clones, and detect cryptocurrency laundering.
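The article does not describe how scammer detection would work in practice. As a toy illustration only, a first triage layer could score conversations against known scam cues before handing suspicious cases to analysts or a trained classifier; the cues and weights below are invented for this sketch, not drawn from ALIGNER:

```python
# Toy red-flag scorer for chat messages in a suspected romance scam.
# Cues and weights are illustrative assumptions; real systems would
# use trained models on conversation structure, timing, and metadata.
RED_FLAGS = {
    "crypto": 2.0, "investment": 1.5, "wallet": 2.0,
    "guaranteed returns": 3.0, "send money": 3.0, "trust me": 1.0,
}

def scam_score(messages: list[str]) -> float:
    """Sum weighted cue hits across a conversation; higher = more suspicious."""
    text = " ".join(messages).lower()
    return sum(weight for cue, weight in RED_FLAGS.items() if cue in text)

chat = [
    "Hey, I feel we really connect",
    "Trust me, this crypto investment has guaranteed returns",
]
print(scam_score(chat))  # 7.5 -> flag conversation for human review
```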
ALIGNER collaborates with professionals from policing, academia, research, industry, and policymaking, including legal and ethics experts, organised in two advisory boards: one for law enforcement expertise, and the other gathering research, industry, and ethics authorities. “To receive a reliable assessment, we need many different experts from different European countries to ensure that we reflect a broad view on these emerging technologies and scenarios. This takes time, especially considering different languages and expertise” Lückerath says.
While AI can be misused by criminals, it also greatly aids law enforcement in combating crime, for example by reducing errors, automating time-consuming tasks, identifying potentially suspicious behaviours, and even speeding up legal procedures by predicting possible outcomes based on past cases. However, care must be taken to prevent AI from creating or amplifying biases and discrimination, as certain geographic areas or groups might be unfairly targeted, leading to a disproportionate increase in arrests.
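The bias concern can be made concrete with a simple audit metric. The sketch below, an illustration under assumed data rather than anything ALIGNER prescribes, compares the false positive rate of a hypothetical "suspicious behaviour" flag across groups; a large gap between groups would indicate exactly the disproportionate targeting described above:

```python
from collections import defaultdict

def false_positive_rates(records):
    """records: iterable of (group, flagged: bool, actually_involved: bool).

    Returns per-group FPR: the share of innocent people wrongly flagged.
    """
    fp = defaultdict(int)   # innocent and flagged
    neg = defaultdict(int)  # innocent overall
    for group, flagged, involved in records:
        if not involved:
            neg[group] += 1
            if flagged:
                fp[group] += 1
    return {g: fp[g] / neg[g] for g in neg if neg[g]}

# Hypothetical audit data: (district, flagged by AI, confirmed involvement)
data = [
    ("north", True, False), ("north", False, False), ("north", True, False),
    ("south", False, False), ("south", False, False), ("south", True, False),
]
rates = false_positive_rates(data)
print(rates)  # north ~0.67 vs south ~0.33 -> innocent people in the north
              # are flagged twice as often, a warning sign of disparate impact
```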
This is why ALIGNER has developed the ALIGNER Fundamental Rights Impact Assessment (AFRIA), a tool that enables law enforcement authorities to further enhance their existing legal and ethical governance systems. It is a method designed to help law enforcement follow ethical guidelines and respect fundamental rights when using AI systems in their work. It consists of a fundamental rights impact assessment template and an AI system governance template that help authorities identify, explain, and record measures to mitigate any potential negative impact an AI system may have on ethical principles. While there is no legal obligation in the EU to perform such assessments, the AFRIA complements existing or forthcoming legal and ethical governance systems, such as the AI Act proposed by the European Commission in 2021. The final shape of that regulation depends on the trilogue negotiations, Lückerath explains: “ALIGNER would like to see a practicable and sensible AI regulation that…enables law enforcement agencies to use AI in an ethical, legal, and socially acceptable way, and still allows us to make use of AI technologies for the betterment of society.”
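The article describes the AFRIA as two templates for identifying, explaining, and recording mitigation measures, but does not publish their fields. Purely as a structural illustration, one entry of such an assessment might be recorded as follows; every field name here is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class FundamentalRightsImpact:
    """One entry of a hypothetical AFRIA-style assessment record.

    Field names are assumptions for illustration; the real AFRIA
    templates are defined by the ALIGNER project, not reproduced here.
    """
    ai_system: str
    right_affected: str        # e.g. "privacy", "non-discrimination"
    impact_description: str
    severity: str              # e.g. "low" / "medium" / "high"
    mitigation: str
    residual_risk: str

entry = FundamentalRightsImpact(
    ai_system="Synthetic-media detector",
    right_affected="freedom of expression",
    impact_description="Legitimate satire could be misclassified as disinformation.",
    severity="medium",
    mitigation="Human review before any takedown request.",
    residual_risk="low",
)
print(entry.right_affected, "->", entry.mitigation)
```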
Lückerath envisions a future in which established national centres across Europe support law enforcement agencies with the ethical, legal, and socially acceptable implementation and deployment of AI technologies, and in which oversight bodies use a harmonised framework to assess AI technologies before, during, and after deployment. In this envisaged future, a harmonious blend of technology and ethics may well redefine the contours of law enforcement, empowering agencies with the tools of AI while maintaining a steadfast commitment to ethical and legal standards.