Real Or AI Trickery? The ‘Lion Attack’ Video That Fooled Millions Online | Tech News


A growing number of mobile apps and tools, including Reface, FaceApp, DeepFaceLab, Faceswap, Lensa, and Wombo, allow users to create realistic synthetic media within minutes

The clip, which begins as a lighthearted vlog of the woman filming what she believes to be a dog, turns horrifying when the animal suddenly attacks. (Instagram/@taii_vloger)


In recent weeks, a series of alarming AI-generated videos has flooded social media, triggering widespread confusion and concern. One particularly viral clip shows an elderly woman being attacked and dragged away by a lion; it left many viewers shocked, while others questioned its authenticity.

The clip, which begins as a lighthearted vlog of the woman filming what she believes to be a dog, turns horrifying when the animal suddenly pounces. However, digital forensics experts have confirmed that the footage is not real. It is a deepfake, an AI-generated video crafted to mimic reality through sophisticated editing.

Such videos, often designed to deceive or amuse, are becoming increasingly common. They rely on artificial intelligence models that can alter faces, voices, and even body movements, creating hyper-realistic illusions. In the viral “lion attack” video, for instance, real footage was merged with AI-generated imagery to simulate the violent encounter.

Deepfakes first gained notoriety when a staged video showing a lion attacking a hunting couple went viral in 2016; it was later revealed to have been produced by an Australian studio as part of an experiment. More recently, a video that appeared to show a lion feasting inside a supermarket in India was confirmed to be another AI fabrication.


The ease of making such videos has amplified their reach. A growing number of mobile apps and desktop tools, including Reface, FaceApp, DeepFaceLab, Faceswap, Lensa, and Wombo, allow users to create realistic synthetic media within minutes. These platforms use advanced neural networks such as Generative Adversarial Networks (GANs) and diffusion models to generate lifelike visuals and voices.

However, while these tools can be used for entertainment, experts warn that their misuse can have serious consequences. “AI videos have become so seamless that even professionals can struggle to detect them at first glance,” noted journalist and fact-checking expert Jatin Gandhi, who shared simple methods to identify fake videos.

According to Gandhi, one quick way to test authenticity is what he calls the “9-second check”. Around the nine-second mark, look for distortions like misaligned hands, awkward body movements, or unnatural facial transitions, he said. Other red flags include faces that fail to blink naturally, mismatched lighting or shadows, and artificial-sounding audio without natural breathing or emotion.

To verify suspicious videos, experts recommend using digital tools like Deepware Scanner, MIT Detect Fakes, and Deepfake Detector, which analyse uploaded clips for synthetic content. Cross-checking sources through reverse image searches can also help confirm whether a video has been manipulated or repurposed from older footage.
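Reverse image search engines typically work by reducing each image to a compact "perceptual fingerprint" and comparing fingerprints rather than raw pixels. The sketch below illustrates one such building block, average hashing, on synthetic data; the function and variable names are illustrative, not the API of any real search tool.

```python
# Minimal sketch of perceptual "average hashing", one building block behind
# reverse image search. A video frame is downscaled to an 8x8 grid of
# brightness averages, each cell is thresholded against the overall mean,
# and the resulting 64 bits act as a fingerprint. Near-duplicate frames
# (e.g. a repurposed clip from older footage) produce fingerprints with a
# small Hamming distance.

def average_hash(pixels, size=8):
    """Compute a 64-bit average hash from a grayscale image.

    `pixels` is a list of rows of grayscale values (0-255). The image is
    reduced to size x size by block averaging, then each cell becomes 1 if
    it is brighter than the mean of all cells, else 0.
    """
    h, w = len(pixels), len(pixels[0])
    bh, bw = h // size, w // size
    cells = []
    for i in range(size):
        for j in range(size):
            block = [pixels[y][x]
                     for y in range(i * bh, (i + 1) * bh)
                     for x in range(j * bw, (j + 1) * bw)]
            cells.append(sum(block) / len(block))
    mean = sum(cells) / len(cells)
    return [1 if c > mean else 0 for c in cells]

def hamming(a, b):
    """Number of differing bits; small distances suggest near-duplicates."""
    return sum(x != y for x, y in zip(a, b))

# Two synthetic 16x16 "frames": identical except for a slight brightness shift.
frame1 = [[(x * 16 + y * 3) % 256 for x in range(16)] for y in range(16)]
frame2 = [[min(255, v + 2) for v in row] for row in frame1]

d = hamming(average_hash(frame1), average_hash(frame2))
print(d)  # small distance: the frames are near-duplicates
```

Production systems use far more robust fingerprints (and face- or audio-specific models for deepfake detection), but the principle of matching compact signatures against a database of known footage is the same.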

Legal experts caution that India currently lacks a dedicated law addressing deepfakes, though multiple provisions of existing legislation apply. Sections 499-500 of the Indian Penal Code, since replaced by the Bharatiya Nyaya Sanhita (BNS), covered defamation (punishable with up to two years of imprisonment or a fine), while Sections 465 and 469 addressed forgery. The Information Technology (IT) Act, 2000, also includes Section 66D, which prescribes up to three years in jail for cheating by personation using a computer resource.

Social media companies, meanwhile, are obligated under the IT Rules 2021 to report and remove misleading or harmful content. In 2023, the government issued an advisory directing platforms to take stronger action against deepfakes.

Several high-profile incidents have already prompted legal intervention. Actor Rashmika Mandanna’s deepfake video led to an FIR in Delhi, while a court order in Anil Kapoor’s case prohibited the misuse of his image and voice. Aishwarya Rai Bachchan reportedly filed a Rs 4 crore lawsuit after deepfake clips featuring her appeared on YouTube.

Cybercrime experts advise victims to file complaints with their local cyber cells or the Ministry of Electronics and Information Technology's grievance portal. The government is also working on new AI regulations that may mandate clear labelling and traceability for synthetic media.

As deepfakes grow more sophisticated, the challenge for internet users lies in balancing curiosity with caution. What appears to be a shocking moment, like a woman being carried away by a lion, might be nothing more than a computer’s illusion.

