Fight Back Against Korea’s N Room 2.0! Ordinary People Use Magic Against AI Face Swaps

Recent events in South Korea surrounding the “N Room 2.0” scandal have once again brought the topic of deepfakes to the forefront. Perpetrators have been gathering on Telegram, using artificial intelligence (AI) to transform women’s photos into explicit images, highlighting a concerning expansion of deepfake technology beyond celebrities and public figures to everyday individuals.

As AI becomes increasingly prevalent, many people want to understand how this fast-moving technology affects daily life. How sophisticated has deepfake technology become? What dangers does it pose? And how can individuals protect themselves against such fabrications?

To delve into these issues, Dr. Chen Peng, a scientist at Rayleigh Intelligence, shared his insights. The company, founded in 2018 and incubated by Tsinghua University’s AI Research Institute, has years of experience in AI verification.

Dr. Chen notes that the average person struggles to identify deepfakes with the naked eye, and that countering them therefore depends largely on AI itself. A single photo can be enough to create a deepfake, and with more data capturing fine facial details such as moles and other distinguishing features, the results become increasingly realistic.

For example, a recent project by two German artists showcases the alarming simplicity of creating deepfakes. They developed an AI camera named NUCA that employs a 3D-printed body and a 37mm wide-angle lens. Within ten seconds, the camera can generate an AI-crafted nude image of the user without ever seeing their body, merely by analyzing their gender, facial features, age, and body type.

The “N Room 2.0” scandal itself involved a Telegram group with 227,000 members, where bots created deepfake images and altered features such as breast size in a matter of seconds. Yet this abuse is only one facet of deepfake technology, which spans a much wider range of applications, including the generation of realistic text, images, audio, and video.

Audio deepfakes, in particular, have emerged as a notable concern. In 2023, tech journalist Joseph Cox fooled a bank's voice-verification system with an AI-generated clone of his own voice. Dr. Chen explained that only a few seconds of recorded speech are now enough for modern tools to clone a voice convincingly.

AI-generated text poses another significant risk, particularly for misinformation and fraud. Because such systems can produce countless fabricated statements about any event within moments, they effectively automate the spread of false narratives on social media.

Dr. Chen cites three main factors for the increased prevalence and ease of deepfakes: breakthroughs in generative AI technologies, the widespread availability of computational power (with consumer-grade graphics cards now capable of running deepfake models), and the development of user-friendly tools that simplify the creation of deepfakes.

The widespread access to deepfake technology raises concerns, since even laypeople can now easily create deceptive content, and ordinary individuals are especially vulnerable to scams. Earlier this year, a multinational company lost $25 million to a sophisticated scam in which an AI deepfake on a video conference convinced staff that they were speaking with legitimate executives.

To protect themselves, Dr. Chen suggests some practical steps for the average person. During video calls, one can ask for unusual actions, such as waving a hand rapidly in front of the face or turning the head sharply, to challenge a potential AI facade, since face-swap models often falter under movements they were not trained on.

Dr. Chen humorously acknowledges that if scammers invest time in creating elaborate models of their targets, even these precautions might not suffice. Consequently, he emphasizes maintaining a high level of vigilance: don’t click on unfamiliar links, avoid sharing verification codes, and be cautious of disclosing personal biometric information like photos, voices, or fingerprints online.

Deepfake technology has another dangerous dimension: revenge porn. Victims of deepfake pornography may endure harassment and extortion, threatened with the release of fabricated explicit material unless they comply with demands. In such cases, AI verification could help exonerate victims by demonstrating that the material is fabricated.

Dr. Chen highlights two main technological approaches to counteracting deepfakes: active defense and passive detection. For active defense, individuals can embed imperceptible noise into their images before sharing them online; this hidden interference can degrade any generative model that later tries to train on or manipulate those photos.
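To make the idea concrete, here is a minimal sketch of such image "cloaking," assuming PyTorch and torchvision are available. It applies a single FGSM-style perturbation against an off-the-shelf classifier so the change stays nearly invisible to humans while confusing a model; real protection tools target face-recognition or generative pipelines specifically, and this is an illustration of the principle, not Rayleigh Intelligence's actual method.

```python
# Sketch: add an imperceptible adversarial perturbation to a photo so that
# models reading it are pushed away from their original interpretation.
# Illustration only, assuming PyTorch + torchvision; not a production tool.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

def cloak_image(path: str, out_path: str, epsilon: float = 4 / 255) -> None:
    # Generic pretrained classifier stands in for whatever model an attacker might use.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

    img = transforms.ToTensor()(Image.open(path).convert("RGB")).unsqueeze(0)  # 1x3xHxW in [0,1]
    img.requires_grad_(True)

    logits = model(img)
    # Compute loss against the model's current belief, then step to increase it,
    # i.e. push the image away from what the model currently "sees".
    loss = F.cross_entropy(logits, logits.argmax(dim=1))
    loss.backward()

    # One FGSM step: tiny sign-of-gradient perturbation bounded by epsilon.
    perturbed = (img + epsilon * img.grad.sign()).clamp(0, 1).detach()
    transforms.ToPILImage()(perturbed.squeeze(0)).save(out_path)

cloak_image("portrait.jpg", "portrait_cloaked.jpg")  # hypothetical file names
```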

Passive detection involves using AI-based platforms to analyze and verify content already in circulation. Rayleigh Intelligence offers several AI-driven products designed to detect deepfake content, enabling quick responses to potential misuse.
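The passive-detection workflow can be pictured as a binary real-versus-fake classifier scoring content after the fact. The sketch below assumes a hypothetical PyTorch checkpoint trained on manipulated-face data; the model architecture, checkpoint name, and file paths are placeholders, and Rayleigh Intelligence's own products are not published as code.

```python
# Sketch of a passive deepfake detector: score an image as real vs. generated.
# The backbone is generic; a real system would be trained on manipulated-face data.
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def load_detector(checkpoint_path: str) -> torch.nn.Module:
    model = models.resnet50(weights=None)
    model.fc = torch.nn.Linear(model.fc.in_features, 2)  # two outputs: [real, fake]
    model.load_state_dict(torch.load(checkpoint_path, map_location="cpu"))
    return model.eval()

def fake_probability(model: torch.nn.Module, image_path: str) -> float:
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(x), dim=1)
    return probs[0, 1].item()  # probability the image is generated

detector = load_detector("deepfake_detector.pt")      # hypothetical checkpoint
score = fake_probability(detector, "suspect_frame.jpg")
print(f"Estimated probability of manipulation: {score:.2%}")
```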

Ultimately, this ongoing cat-and-mouse game between deepfake generation and detection underscores the need for a long-term commitment to developing protective techniques against an evolving threat. Although technology cannot guarantee complete protection, greater awareness helps people take proactive steps to recognize and avoid potential scams.

As we move forward in a technologically advanced world, it’s crucial to build a culture of vigilance where individuals are encouraged to heed their instincts and remain cautious of sharing personal information. Collaboration between legal frameworks, media, and the tech industry can contribute to a safer online environment, allowing society to harness technology responsibly rather than fearfully.
