AI Face Models - New Recruitment Scams Uncovered

In short: scammers are recruiting real people to serve as live faces in deepfake-assisted video calls that trick others. This alarming trend exploits young women, drawing them into fraud operations and, in some cases, human trafficking. Awareness is key to prevention.
What Happened
In a disturbing trend, scammers are recruiting individuals as AI face models to conduct fraudulent video calls. These models, primarily young women, are lured by promises of high salaries and flexible work conditions. Instead, they find themselves part of elaborate scams, known as pig-butchering, which target victims primarily in the U.S. A recent investigation by WIRED revealed numerous job postings on Telegram advertising positions for these AI face models in Southeast Asia, particularly Cambodia.
These job ads typically require applicants to make up to 100 video calls per day, using deepfake technology to manipulate potential victims. The recruitment process often requires applicants to share personal details and even send videos of themselves as part of the application. Many applicants are unaware of the true nature of the work, believing they are entering legitimate job opportunities.
Who's Being Targeted
The primary targets of these scams are unsuspecting individuals in the United States and other Western countries. Scammers use attractive personas, often built from stolen images or deepfake technology, to lure victims into conversations on social media platforms. Once a relationship is established, the scammers extract money under various pretenses, most commonly fake cryptocurrency investment opportunities or romance schemes.
The recruitment of AI face models has become a significant part of these operations. As cybercriminals adopt more advanced technologies, they can create more convincing interactions, making it harder for victims to discern the truth. This trend raises concerns about the growing sophistication of online scams and the exploitation of vulnerable people along the way.
What Data Was Exposed
While no traditional data breach is involved here, the consequences are severe. Victims can lose significant amounts of money, and the personal data submitted by applicants for these modeling jobs can be misused. The ads themselves carry red flags, such as vague job descriptions and promises of unusually high pay, that mark them as part of a larger scam operation. The use of terms like 'clients' instead of 'victims' further underscores the deceptive nature of these operations.
Moreover, the conditions under which these models work can be dire. Many are reported to have their passports confiscated, limiting their freedom and trapping them in exploitative situations. This highlights the intersection of human trafficking and cybercrime, where individuals are both victims and unwitting participants in a fraudulent system.
What You Should Do
If you encounter job postings that seem too good to be true, especially those involving video calls or AI modeling, exercise caution. Here are some steps to take:
- Research the company: Look for reviews or reports about the organization behind the job listing.
- Be wary of high salaries: If a job promises unusually high pay for minimal work, it could be a scam.
- Protect your personal information: Never share sensitive personal details without verifying the legitimacy of the job.
- Report suspicious ads: If you come across job postings that seem fraudulent, report them to the platform and relevant authorities.
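The warning signs above can be turned into a toy screening heuristic. The sketch below is purely illustrative: the phrase lists and category names are assumptions drawn from the red flags described in this article, not a real fraud-detection system.

```python
# Toy red-flag scanner for job postings. The RED_FLAGS phrase lists are
# illustrative assumptions based on the warning signs in this article
# (unusually high pay, vague roles, video-call quotas, requests for
# personal videos, messaging-app-only contact). Not a production tool.
RED_FLAGS = {
    "unusually high pay": ["high salary", "earn up to", "easy money"],
    "vague role": ["flexible work", "no experience needed"],
    "video call quota": ["video calls per day", "calls per day"],
    "personal video requested": ["send a video", "video of yourself"],
    "messaging-app-only contact": ["telegram", "whatsapp only"],
}

def scan_posting(text: str) -> list:
    """Return the list of red-flag categories matched in a job posting."""
    lowered = text.lower()
    return [
        label
        for label, phrases in RED_FLAGS.items()
        if any(phrase in lowered for phrase in phrases)
    ]

if __name__ == "__main__":
    ad = ("Flexible work, high salary! Make 100 video calls per day. "
          "Send a video of yourself on Telegram to apply.")
    print(scan_posting(ad))
```

A posting that trips several categories at once, like the sample above, is exactly the kind of listing worth reporting rather than answering.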
By staying informed and vigilant, you can help protect yourself and others from falling victim to these sophisticated scams.
Wired Security