
A Safer Space for Professionals: LinkedIn’s New Initiative to Combat Fake Profiles

In an effort to address its fake profile problem, the platform has developed an AI image detector that spots fake profile photos.

This week, LinkedIn introduced a new AI image detector with a 99% accuracy rate for detecting fake profiles.
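For context, a detector's reported accuracy is usually measured against a labelled evaluation set of known real and known fake photos. The minimal sketch below shows that kind of evaluation; the scores, labels, and threshold are made-up illustrations, not LinkedIn's data.

```python
# Minimal sketch of how a detection-accuracy figure might be measured.
# The detector scores, labels, and threshold below are made-up examples,
# not LinkedIn's data.
def evaluate(scores, labels, threshold=0.5):
    """Compute accuracy and false-positive rate for a fake-photo detector.

    scores: detector output per photo (higher = more likely fake)
    labels: 1 for a known fake photo, 0 for a known real one
    """
    preds = [1 if s >= threshold else 0 for s in scores]
    correct = sum(p == y for p, y in zip(preds, labels))
    false_pos = sum(p == 1 and y == 0 for p, y in zip(preds, labels))
    return correct / len(labels), false_pos / labels.count(0)

scores = [0.92, 0.88, 0.97, 0.10, 0.05, 0.30]  # hypothetical detector outputs
labels = [1, 1, 1, 0, 0, 0]                    # 1 = fake, 0 = real
accuracy, fpr = evaluate(scores, labels)
print(f"accuracy={accuracy:.2%}, false-positive rate={fpr:.2%}")
```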

According to LinkedIn’s Trust Data Team, the new technique can detect fake profile photos and remove fake accounts before they reach LinkedIn members.

This latest security breakthrough comes only a few months after it was found that LinkedIn was used in more than half of all phishing attacks in Q1 2022.


Why do people create fake LinkedIn profiles?

Like Twitter, LinkedIn has recently struggled with the number of fake profiles on its service. The platform discovered and removed 21 million fake accounts in the first half of 2022 alone.

But why are all these fake profiles appearing? Some people create them to build trust with visitors to their websites, while others do it for SEO purposes, under the mistaken belief that Google ranks articles with named authors higher than those without.

Whatever the motive, there’s little question that developments in AI have made it much easier to create a fake profile.

Fake Accounts Have Become More Difficult to Detect 

This new technique is the result of extensive research into the fundamental differences between AI-generated faces and real ones, differences that most humans are unable to detect.

LinkedIn monitors unwelcome behaviour that may pose a security concern, such as fake profiles and content guideline breaches. Until recently, however, sophisticated AI-generated images were hard to identify.

The key to resolving this has been knowing exactly what to look for. According to LinkedIn, AI-created images share common characteristics, which it labels “structural differences”. These structural differences do not exist in real photos.

Their blog post cites a test comparing a composite of 400 AI-generated photographs with a composite of 400 real pictures. Averaging the genuine photographs produced a blurry composite, while the composite of AI-generated faces stayed sharp around the eyes and nose, demonstrating that these regions are extremely similar from one fake image to the next.
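The intuition behind that composite test can be illustrated in a few lines: average many aligned face crops and compare how sharp the eye-and-nose region of each composite remains. The sketch below is a rough illustration of that idea, not LinkedIn's detector; the folder names, crop box, and the Laplacian-variance sharpness proxy are all assumptions.

```python
# Minimal sketch: average many aligned face crops and compare sharpness
# around the eye/nose region. Folder paths, crop box, and the
# Laplacian-variance sharpness proxy are illustrative assumptions,
# not LinkedIn's actual pipeline.
import glob
import cv2
import numpy as np

def composite(folder, size=(256, 256)):
    """Average all face images in a folder into one composite image."""
    paths = glob.glob(f"{folder}/*.jpg")
    acc = np.zeros((size[1], size[0], 3), dtype=np.float64)
    for p in paths:
        img = cv2.imread(p)
        acc += cv2.resize(img, size).astype(np.float64)
    return (acc / len(paths)).astype(np.uint8)

def eye_nose_sharpness(img, box=(64, 96, 192, 192)):
    """Variance of the Laplacian over a central eye/nose crop: higher = sharper."""
    x0, y0, x1, y1 = box
    gray = cv2.cvtColor(img[y0:y1, x0:x1], cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

# A sharp composite of generated faces next to a blurry composite of real
# photos suggests the generated faces share a common eye/nose layout.
for label, folder in [("real", "real_faces"), ("generated", "gan_faces")]:
    print(label, eye_nose_sharpness(composite(folder)))
```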

While AI continues to create new security risks, LinkedIn’s recent breakthrough can be seen as a win in the fight against fraudulent accounts.
