
The use of AI for scams, image manipulation, fake websites, viruses, malware, and ransomware is becoming increasingly common. Criminals also use AI to create toxic content for social media to gain views or defame victims.
Thanks to AI, cybercriminals now expend less effort than before to prepare for an attack, according to Ngo Minh Hieu, Director of Chong Lua Dao (Anti-Scam) Project, at an event on May 19.
Citing a UN study, Hieu noted that criminals can buy deepfake software for as little as $25-30 to impersonate police or other authorities in scams.
In Cambodia, a criminal group was found developing a tool so advanced that it needs only a script and a list of Facebook accounts to interact with victims automatically. If a victim responds, the tool follows the script, enabling criminals to deceive thousands of people daily.
According to Hieu, deepfake and deepvoice tools need only a single photo or a 20-30-second audio clip to create realistic scam videos.
He cited cases with losses in the tens of millions of dollars caused by deepfakes, such as an employee at a multinational company in Hong Kong (China) who was tricked into transferring $25 million after a video call with a scammer posing as the company’s chief financial officer.
The deepfake scam is expected to worsen as technology grows more sophisticated. However, internet users are unwittingly making it easier for criminals to impersonate them.
This stems from the habit of casually sharing photos or leaving friend lists public on platforms like Facebook, TikTok, and Instagram.
Scammers also trick users into accepting friend requests and sharing photos, which are then stolen to impersonate victims or to create explicit images for extortion.
Hieu advises users to set photo visibility to “friends only” and friend lists to “only me” to prevent data leaks.
He also calls for responsible use of AI and social media, and for limiting the sharing of personal information. If a video call is suspected of being a deepfake, he suggests watching the speaker’s mouth movements or asking the caller to stand up, sit down, or turn their head to verify their identity.
Earlier, at a cybersecurity conference in June 2024, a VNPT eKYC representative noted that cybercriminals use AI-generated deepfakes to bypass online authentication, posing challenges for service providers.
Among the four common eKYC fraud methods in Vietnam, deepfake is the most complex, as it uses AI to analyze a person’s face and voice and then recreate or edit images and videos with new actions or gestures.
VNPT has used AI to develop solutions against eKYC fraud, including face comparison, face search, fake face detection, fake document detection, voice verification, and data analysis to detect abnormalities.
Trong Dat