AI and “new technologies” to the rescue of security

"Intelligent" video surveillance, crime prediction systems and other AI-based tools are more and more being used by police forces. The IAAP project, led by a team at IMT Atlantique, is examining the effectiveness of these systems and the changes they are bringing about.

IAAP for Artificial Intelligence and Police Activity

Digital technologies - AI in particular - are playing an increasingly important role in police forces’ arsenal. “Intelligent” video surveillance, predictive mapping and, perhaps tomorrow, facial recognition: all these tools are designed to facilitate police work. But these recent uses also raise a host of questions. They are the subject of a research project called IAAP, for Artificial Intelligence and Police Activity, coordinated by Florent Castagnino, a lecturer in sociology at IMT Atlantique.

Florent Castagnino

The development of these new tools is not neutral. “We might wonder, for example, whether the use of artificial intelligence and algorithms might not influence the very perception of acts of delinquency, or even their definition,” observes Florent Castagnino. “One of the risks is that police action becomes focused on crimes and offenses that lend themselves to algorithmic processing.” Other questions concern the shift in police strategy towards attempts to predict criminal acts, and the arrival, via high-tech tools, of new players - IT companies, consultancies, start-ups - in the field of public safety. There are also issues surrounding individual freedoms and personal data protection, in a general context of the ever-growing presence of algorithms in society.

Launched in June 2022 for a four-year period, the IAAP project is receiving 300,000 euros in funding from the French National Research Agency (ANR). At IMT Atlantique, it involves a team of researchers from the Interdisciplinary Social Sciences Department (DI2S) and the Data Science Department (DSD), working with researchers from the Centre de recherche sociologique sur le droit et les institutions pénales (Cesdip) at Université Versailles-Saint-Quentin and from Cresppa-CSU (Centre de recherches sociologiques et politiques de Paris/Cultures et sociétés urbaines) at Université Paris-8; exchanges also take place with Université de Montréal and the Institut National de la Recherche Scientifique du Québec. The work is based in particular on a comparative study carried out in two major cities in France and two in Quebec, each of which uses digital technologies for security purposes in varying forms.

Effectiveness in question

The project focuses on two technologies currently at the forefront of the security scene: first, the use of video cameras to monitor public spaces (in particular roads and public transport) in order to detect specific events as they occur; second, predictive mapping tools based on AI and machine learning - such as the PredPol system in the United States - which direct police patrols towards “hot spots”.
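At their simplest, predictive mapping tools of this kind rank areas of a city by their history of recorded incidents. The sketch below is a deliberately naive illustration of that grid-counting idea - it is not PredPol’s actual (proprietary) model, and the incident coordinates are invented for the example.

```python
from collections import Counter

# Hypothetical past incident coordinates (x, y), in arbitrary city units.
incidents = [(1.2, 3.4), (1.3, 3.5), (1.1, 3.3), (5.0, 0.2), (5.1, 0.3), (9.9, 9.8)]

CELL = 1.0  # grid cell size


def to_cell(point, cell=CELL):
    """Map a coordinate to the grid cell that contains it."""
    x, y = point
    return (int(x // cell), int(y // cell))


def hot_spots(incidents, top=2):
    """Rank grid cells by historical incident count (a naive proxy for risk)."""
    counts = Counter(to_cell(p) for p in incidents)
    return counts.most_common(top)


print(hot_spots(incidents))  # -> [((1, 3), 3), ((5, 0), 2)]
```

Note that a system built this way simply sends patrols where crimes were already recorded - which is precisely how the feedback effects discussed later in the article can arise.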

Both types of tool are primarily designed to “optimize” police resources. Adding software layers enables them to be enhanced with new functionalities (such as facial recognition, currently banned in France) and to automate certain tasks. This automation is also changing the scale and nature of security work: the “field” police officer could become a practitioner who juggles digital technologies.

Are these technologies really effective? In any case, they give rise to many reservations. Police officers, for example, fear that they could be used to “rationalize” their activities - by providing their superiors with detailed indicators of their actions. Civil society, for its part, is concerned about the risks to personal data protection and about discriminatory bias. “Not to mention the fact that, from the point of view of criminological knowledge, the evidence still leaves us wanting,” adds the researcher.

In practice, the initial results are not very convincing. A number of studies have shown that the use of these tools, even when visible, does not significantly reduce crime: “The installation of video cameras in a given area only reduces the probability of a crime by around 10% on average,” notes Florent Castagnino.

Algorithm and learning constraints

Faced with this situation, authorities often tend to push the logic further and increase their investments: more cameras, higher-performance equipment, more automatic alert systems... “Despite these efforts, the results of conventional video surveillance remain uneven depending on the type of crime and the location,” observes Florent Castagnino. “In video-monitored parking lots, the probability of criminal acts occurring is reduced by around 37%. With algorithmic video surveillance, detection of zone crossing works fairly well. But some incidents go undetected, and alerts are triggered incorrectly... In reality, it all depends on the learning technique used and on the criteria selected for alerts - a person falling, crowd density, abandoned luggage... The algorithm has to learn and improve little by little.” This raises the question of how long images can be stored. The legal limit is currently 30 days, but keeping footage that long requires high-capacity servers, so in practice images are generally overwritten after just a few days.
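The “zone crossing” detection the researcher mentions is, at heart, a simple geometric test applied to an object’s tracked positions. The toy sketch below assumes a restricted zone modelled as a rectangle and an invented trajectory; real systems work on camera footage and learned object trackers, which is where the errors described above creep in.

```python
# Hypothetical restricted zone as an axis-aligned rectangle (xmin, ymin, xmax, ymax).
ZONE = (2.0, 2.0, 4.0, 4.0)


def in_zone(point, zone=ZONE):
    """Return True if the point lies inside the restricted zone."""
    x, y = point
    xmin, ymin, xmax, ymax = zone
    return xmin <= x <= xmax and ymin <= y <= ymax


def zone_crossings(track):
    """Return the indices at which a tracked object ENTERS the zone."""
    alerts = []
    inside = False
    for i, p in enumerate(track):
        now = in_zone(p)
        if now and not inside:
            alerts.append(i)  # entry event -> raise an alert
        inside = now
    return alerts


# Invented trajectory: the object enters the zone twice.
track = [(0, 0), (1.5, 1.5), (3, 3), (3.5, 3.2), (5, 5), (3, 3)]
print(zone_crossings(track))  # -> [2, 5]
```

The geometry is trivial; what is hard in practice is reliably turning pixels into tracked positions - hence the missed incidents and false alerts the article describes.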

Being “different” may become suspect

For the moment, the costs are very high and the results are mixed. “So far, international surveys converge: ‘classic’ video surveillance is only really used and conclusive in 1.5 to 3% of investigations or incidents on the public highway,” points out Florent Castagnino. “Will AI really make it more effective? It is not certain.” As for predicting criminal acts, it first comes up against a fundamental question: what is suspicious behaviour? “It is very subjective,” says the researcher. “The algorithm memorizes recurring behavioural patterns. As a result, any behaviour that is statistically unusual, even if it is neither dangerous nor illicit, is likely to appear suspicious. In short, it is difference itself that becomes suspect. Conversely, for the algorithm, regular drug-dealing spots may come to look like the norm.” In reality, it will take time to fully measure the effects of these tools on police activity.
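The mechanism the researcher describes - “difference becomes suspect” - can be illustrated with the most basic form of statistical anomaly detection: flagging any value far from the mean. In the invented example below, the one long dwell time belongs to someone harmlessly lingering, yet it is the only observation the rule flags.

```python
import statistics

# Hypothetical dwell times (seconds) observed for people in a monitored area.
# The 300-second outlier might be a street performer or someone waiting for a friend.
dwell_times = [30, 35, 40, 32, 38, 36, 33, 300]


def flag_unusual(values, threshold=2.0):
    """Flag values more than `threshold` standard deviations from the mean.

    Statistically unusual != dangerous: the rule only measures rarity,
    not illegality or danger.
    """
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return [v for v in values if abs(v - mean) / stdev > threshold]


print(flag_unusual(dwell_times))  # -> [300]
```

Conversely, a behaviour that recurs often enough - such as activity at a regular dealing spot - pulls the statistics towards itself and stops looking anomalous, which is the second effect the researcher points out.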

Another area of work for the project concerns the reconfiguration of the private security market, with the prospect of "safe cities" emerging. In this market, alongside the offerings of traditional security and defense companies, tech players (including start-ups) are arriving with tools that replace or refine pixel analysis using machine learning systems, particularly for counting, detection and visualization.

The IAAP project does not only aim to produce knowledge on these subjects: in the long term, the researchers also plan to formulate recommendations for police forces - for example, better distinguishing criminogenic factors, avoiding the errors algorithms can induce (such as the excessive targeting of particular categories of people), and abandoning these tools when such risks become too great. "In terms of public safety, even if AI can be very useful in certain cases, no magic effect should be expected from it," says Florent Castagnino. "Ultimately, the development of these tools should be an opportunity to collectively rethink the prioritization of police missions."

Published on 06.01.2025

by Pierre-Hervé VAILLANT
