HIGH · Fraud

AI Face Models - New Recruitment Scams Uncovered

🎯 Basically, scammers are hiring real people to appear in fraudulent video calls, putting a live human face behind fake personas to trick victims.

Quick Summary

Scammers are recruiting people, mostly young women, to work as "AI face models" in fraudulent video calls. The trend feeds pig-butchering scams and, in some cases, human trafficking. Awareness is key to prevention.

What Happened

In a disturbing trend, scammers are recruiting individuals as "AI face models" to conduct fraudulent video calls. These models, primarily young women, are lured by promises of high salaries and flexible work conditions. Instead, they find themselves part of elaborate fraud operations, known as pig-butchering scams, which target victims primarily in the U.S. A recent investigation by WIRED uncovered numerous Telegram job postings advertising these roles in Southeast Asia, particularly Cambodia.

These job ads typically require applicants to make up to 100 video calls per day, using deepfake technology to manipulate potential victims. The recruitment process often includes sharing personal details and even sending videos of themselves as part of the application. Many of these applicants are unaware of the true nature of the work they are signing up for, believing they are entering legitimate job opportunities.

Who's Being Targeted

The primary targets of these scams are unsuspecting individuals in the United States and other Western countries. Scammers use attractive personas, often created through stolen images or deepfake technology, to lure victims into conversations on social media platforms. Once a relationship is established, these scammers attempt to extract money from their victims under various pretenses, including fake investment opportunities in cryptocurrency or romance scams.

The recruitment of AI models has become a significant aspect of these operations. As cybercriminals increasingly adopt advanced technologies, they are able to create more convincing interactions, making it harder for victims to discern the truth. This trend raises concerns about the growing sophistication of online scams and the exploitation of vulnerable individuals in the process.

What Data Was Exposed

While there is no traditional data breach in this scenario, the implications of these scams are severe. Victims may lose significant amounts of money, and the personal data of those applying for these modeling jobs can be misused. The ads often contain red flags, such as vague job descriptions and high salary promises, indicating that they are part of a larger scam operation. The use of terms like 'clients' instead of 'victims' further highlights the deceptive nature of these operations.

Moreover, the conditions under which these models work can be dire. Many are reported to have their passports confiscated, limiting their freedom and trapping them in exploitative situations. This highlights the intersection of human trafficking and cybercrime, where individuals are both victims and unwitting participants in a fraudulent system.

What You Should Do

If you encounter job postings that seem too good to be true, especially those involving video calls or AI modeling, exercise caution. Here are some steps to take:

  • Research the company: Look for reviews or reports about the organization behind the job listing.
  • Be wary of high salaries: If a job promises unusually high pay for minimal work, it could be a scam.
  • Protect your personal information: Never share sensitive personal details without verifying the legitimacy of the job.
  • Report suspicious ads: If you come across job postings that seem fraudulent, report them to the platform and relevant authorities.

By staying informed and vigilant, you can help protect yourself and others from falling victim to these sophisticated scams.

🔒 Pro insight: The rise of "AI face models" in scam operations reflects a troubling evolution in cybercrime tactics, leveraging deepfake technology to enhance deception and victim manipulation.

Original article from Wired Security · Matt Burgess


Related Pings

HIGH · Fraud

Fraud Prevention - Fingerprint Launches AI-Powered Insights

Fingerprint has launched its MCP Server, revolutionizing fraud prevention with real-time AI insights. This tool connects AI assistants to device intelligence, enhancing fraud analysis efficiency. With 99% of companies facing AI-enabled fraud losses, this innovation is crucial for timely responses.

Help Net Security

HIGH · Fraud

Investment Scams - Fake Scandal Clips on Facebook Exposed

Bitdefender has uncovered a series of investment scams on Facebook using fake news and celebrity impersonation. Over 26,000 ads targeted victims worldwide, raising significant concerns about online safety. Meta is taking steps to combat these fraudulent activities, but users must stay alert.

Help Net Security

HIGH · Fraud

SocksEscort Botnet Taken Down in Major Fraud Operation

A global operation has taken down the SocksEscort botnet, which compromised thousands of routers for fraud. Victims included individuals and businesses, with millions lost. Authorities seized domains and servers, freezing millions in cryptocurrency.

SC Media

MEDIUM · Fraud

Fake Shipment Tracking Scams Surge in MEA Region

Fake shipment tracking scams are on the rise in the MEA region, targeting online shoppers and small businesses. Scammers create urgency to trick victims into providing personal information. Stay vigilant and verify sources to protect yourself.

Group-IB Blog

HIGH · Fraud

Beware of Fake Malwarebytes Renewal Notices in Your Calendar

Scammers are sending fake renewal notices from Malwarebytes in calendar invites. Victims may be tricked into calling fake billing numbers, risking their financial information. Stay alert and verify any suspicious invites.

Malwarebytes Labs

HIGH · Fraud

AI vs. Phishing: Can It Protect Your Smartphone?

Phishing attacks are becoming more sophisticated, targeting smartphone users. New research shows that AI might help combat these threats. Stay vigilant to protect your personal information and finances.

Dark Reading