by AKANI CHAUKE
JOHANNESBURG – CYBER criminals are forecast to make the most of Valentine’s Day to pounce on lonely hearts looking for love in the digital world.
An executive has warned that searching for romance online has always been a treacherous journey, fraught with scammers and catfishers, but with the rise of deepfakes, the dangers have only escalated.
Doros Hadjizenonos, Regional Director Southern Africa at Fortinet, said as cyber criminals harness the power of artificial intelligence (AI) to carry out their malicious schemes, the risk of being misled by seemingly genuine individuals has never been higher.
He said tricksters had been defrauding victims for generations, but the emergence of ever more sophisticated technology is enabling them to do so faster, in greater numbers, and at lower risk to themselves.
“Attackers are even more likely to strike at romance-focused times like Valentine’s Day,” Hadjizenonos said.
Valentine’s Day is marked on February 14, while the entire month of February is widely known as the month of love.
Romance scams usually involve a cybercriminal developing a relationship with the victim to gain the victim’s affection and trust, then using that close relationship to manipulate and steal from them, Hadjizenonos explained.
Some request intimate photos and videos and later use these to extort money.
Crooks also rely on social engineering tactics such as phishing (fraudulent emails), smishing (fraudulent text messages) or even vishing (fraudulent voice calls).
According to the Federal Bureau of Investigation’s (FBI’s) latest report on internet crime, between 2019 and 2021 there was a 25 percent increase in the number of complaints the agency received in the United States about romance scams.
Those affected lost a record high of $547 million in 2021 alone after being swindled by their “cyber sweetheart.”
Hadjizenonos stated AI is already used defensively in many ways, such as detecting unusual behaviour that may indicate an attack, usually by botnets.
However, this technology could also be used to create “deepfakes” – convincing hoax images, sounds, and videos.
Deepfake technology can be used within social engineering scams, with audio fooling people into believing trusted individuals have said something they did not.
It can also be used to spread automated disinformation attacks or even to create new identities and steal the identities of real people.
This could eventually lead to impersonations over voice and video applications that could pass biometric analysis, posing challenges for secure forms of authentication such as voiceprints or facial recognition.
“As this technology becomes mainstream, we will need to change how we detect and mitigate threats, including using AI to detect voice and video anomalies,” Hadjizenonos concluded.
– CAJ News