On closer inspection, she realized it was an AI-generated image used to illustrate a sentimental post. It was the second time she had nearly been fooled. Previously, she mistook a video titled “Retirees meet summer vacationers” for real-life footage.

Despite working in media and frequently encountering AI-generated content, Linh admitted that the technology has advanced so rapidly and realistically that it’s now difficult to tell what’s real and what’s not.

Viral Facebook photo generated by AI. Screenshot.

Experts agree. Tools like Google Veo 3, Kling AI, DALL·E 3, and Midjourney now create photos and videos with near-perfect realism.

Do Nhu Lam, Director of Training at the Institute of Blockchain and Artificial Intelligence (ABAII), explained that with advanced multimodal technologies and sophisticated language models, these tools can synchronize visuals, audio, facial expressions, and natural motion to produce highly convincing content.

Do Nhu Lam, Director of Training at ABAII, says advanced AI technologies produce highly persuasive content. Photo: Provided by subject.

Lam acknowledged AI’s potential in content creation, advertising, entertainment, and education. But this very ability to replicate reality blurs the lines between real and fake, raising significant ethical, security, and information governance challenges.

The post Linh encountered had nearly 300,000 interactions and over 16,000 comments, with users enthusiastically congratulating the “parents” without realizing the image was fake. Some more alert users criticized others for being duped by AI.

Across Facebook groups, AI-generated videos are increasingly common. The launch of Google Veo 3 has markedly improved video quality, notably by syncing lip movements with speech, making fakes even harder to detect.

Stay alert in the AI era

Vu Thanh Thang, CAIO of SCS, says we are now living in an era where everything can be faked. Photo: Provided by subject.

AI-generated media poses serious risks, especially for vulnerable or less tech-savvy users. Vu Thanh Thang, Chief AI Officer at SCS Cybersecurity Corporation, warned that criminals are exploiting AI in scams, biometric spoofing, and impersonation - fooling systems like eKYC and spreading misinformation using fake videos of celebrities.

Thang added that businesses are also targets. AI deepfakes can impersonate staff to bypass security, manipulate facial recognition systems, or mimic executives to damage reputations or commit fraud.

Do Nhu Lam outlined three key risks of AI for individuals: financial scams, defamation, and misuse of personal information. For companies, he cited a case involving Arup, which lost USD 25 million after an employee at its Hong Kong branch was tricked into transferring funds during a deepfake video meeting.

Another grave consequence is the erosion of public trust. If people cannot distinguish real from fake, trust in media and reliable sources deteriorates. Lam referenced a 2024 Reuters Institute report showing global trust in news on digital platforms has fallen to its lowest point in a decade - largely due to deepfakes.

"We're no longer talking about the risk of fake content - it's a full-blown reality," Thang said. He urged the public to raise their awareness and adopt protective habits, including understanding how AI works and how to coexist with it safely.

Both experts recommend users verify content before acting on it, learn to spot fabricated media, limit sharing personal information online, and report fake or harmful content. “Only with knowledge and vigilance can individuals protect themselves and contribute to a safer digital space in the AI age,” Lam said.

Du Lam