Deep fake image on laptop

Research supported by a University of Portsmouth Cyber Security expert has led to the development of 'DeepGuard', a software tool that can identify AI-generated images

3 March 2025

7 minutes

Realistic images created by artificial intelligence (AI), including those generated from a text description and those used in video, pose a genuine threat to personal security. From identity theft to misuse of a personal image, spotting what’s real and what’s fake is getting harder and harder.  

A research collaboration involving the University of Portsmouth's Artificial Intelligence and Data Science (PAIDS) Research Centre has developed an innovative solution to accurately distinguish between fake and genuine images, as well as identify the source of an artificial image. 

The tool, DeepGuard, combines three advanced AI techniques: binary classification, ensemble learning, and multi-class classification. These methods enable the AI to learn from labelled data, making smarter and more reliable predictions.
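To make the three ideas concrete, here is a minimal, purely illustrative sketch (not the DeepGuard implementation, whose details are in the published paper): a binary real-vs-fake decision, an ensemble majority vote over several weak classifiers, and a multi-class step that attributes a fake image to a generator family. The feature scores, thresholds, and generator labels are all hypothetical.

```python
# Illustrative sketch only -- not the DeepGuard implementation.
# Shows the three techniques the article names: binary classification,
# ensemble learning (majority vote), and multi-class classification
# (source attribution). All feature values and labels are made up.

from collections import Counter

def threshold_classifier(threshold):
    """A weak binary classifier: votes 'fake' if a single hypothetical
    feature score exceeds its threshold, else 'real'."""
    return lambda feature: "fake" if feature > threshold else "real"

def ensemble_predict(classifiers, feature):
    """Ensemble learning: majority vote across the weak classifiers."""
    votes = Counter(clf(feature) for clf in classifiers)
    return votes.most_common(1)[0][0]

def attribute_source(feature):
    """Multi-class step (hypothetical): map a fake image's feature
    score to one of several assumed generator families."""
    if feature > 0.9:
        return "diffusion-model"
    if feature > 0.7:
        return "GAN"
    return "face-swap"

# Binary stage: an ensemble of three weak classifiers.
ensemble = [threshold_classifier(t) for t in (0.4, 0.5, 0.6)]

def classify_image(feature):
    """Full pipeline: binary decision first, then attribution for fakes."""
    label = ensemble_predict(ensemble, feature)
    if label == "fake":
        return label, attribute_source(feature)
    return label, None

print(classify_image(0.2))   # low score: judged real, no attribution
print(classify_image(0.95))  # high score: judged fake, then attributed
```

In a real system each weak classifier would be a trained model over learned image features rather than a fixed threshold, but the control flow (binary gate, vote, then multi-class attribution) is the same.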

It is a tool that can be used to investigate and prosecute criminal activity such as fraud, or by the media to ensure images used in their stories are authentic to prevent misinformation or unintentional bias. 

DeepGuard has been developed by a research team led by Dr Gueltoum Bendiab and Yasmine Namani from a Department of Electronics in Algeria, and involving Dr Stavros Shiaeles from the University's PAIDS Research Centre and School of Computing.

Dr Shiaeles said: “With ever evolving technological capabilities it will be a constant challenge to spot fake images with the human eye. Manipulated images pose a significant threat to our privacy and security as they can be used to forge documents for blackmail, undermine elections, falsify electronic evidence and damage reputations, and can even be used to incite harm, by adults, to children. People are also profiteering disingenuously on social media platforms like TikTok where images of models are being turned into characters and animated in different scenarios in games or for entertainment. 

“DeepGuard, and future iterations, should prove to be a valuable security measure for verifying images, including those in videos, in a wide range of contexts.”

The research, published in the Journal of Information Security and Applications, will also support further academic work in this area.

During its development, the team reviewed and analysed methods for both image manipulation and detection, focusing specifically on fake images involving facial and bodily alterations. They considered 255 research articles published between 2016 and 2023 that examined various techniques for detecting manipulated images - such as changes in expression, pose, voice, or other facial or bodily features. 
