The Growing Threat of AI Fraud
Generative AI is transforming the way we communicate and work, making waves across industries like cybersecurity, manufacturing and health care.
But, just like any other innovation, GenAI comes with challenges, including the potential for AI-driven fraud and synthetic media. With access to AI tools, malicious actors are developing more sophisticated phishing and impersonation tactics.
Whether you’re an individual consumer or represent a business, being aware of such emerging threats and equipping yourself with the right tools and strategies can help you effectively mitigate the risks and keep your digital identity protected.
This guide offers the knowledge and resources to help you handle the threat of AI fraud.
Understanding Generative AI Fraud
Generative AI scams are a growing concern for businesses and individuals in a digitally driven society. This umbrella term represents several deceptive practices that leverage the evolving capabilities of artificial intelligence to manipulate or create malicious digital content.
Popular Generative AI Scams
- Voice AI scams: Bad actors may use voice modulation AI tools to impersonate trusted people like family members, officials or influencers. This may compromise personal or business security if the impersonation is convincing enough.
- Deepfake technologies: Hyper-realistic fake videos and audio recordings pose risks to personal identity security and corporate integrity. These attacks are no longer rare: one study found that 29% of businesses had already experienced deepfake fraud attempts. Deepfakes can also be used in elaborate social engineering schemes to trick people into revealing sensitive information or unknowingly authorizing fraudulent transactions.
Challenges and Threats
Scams involving generative AI are becoming more powerful for several reasons:
- Impersonation: Scammers might impersonate a company’s CEO or even a relative, exploiting people’s trust to trick them into actions like authorizing fraudulent money transfers.
- Increasing sophistication: AI technologies are constantly improving, making these scams increasingly challenging to detect. As imperfections in AI content disappear or become more subtle, it becomes more difficult to spot when something is fake.
- Manipulation of public perception: Bad actors could misuse AI-generated content to spread false narratives or fake news — like doctored videos of executives or politicians. This type of propaganda can manipulate public opinion, influence elections, and stir up social unrest.
How LLMs Are Impacting Phishing and BEC Scams
New tools such as ChatGPT, which use large language models (LLMs), are making phishing and business email compromise (BEC) scams trickier to spot.
A BEC scam, also known as email account compromise (EAC), is a tactic in which scammers send emails that appear to come from someone you know and trust, such as your CEO or a business partner. The goal is to seem so familiar and routine that you act without thinking twice.
Here’s how LLM tools allow scammers to create convincing fake emails:
- Smart email writing: With the right prompts, ChatGPT can produce email copy that is polished and engaging, free of the spelling and grammar mistakes that once gave phishing away. A study shows that people click on these fake emails 30-44% of the time.
- Language tricks: Generative AI tools can quickly translate scam emails into different languages. This means scammers can now target more people across the globe, even in places that haven’t seen such scams before.
- Personalized targeting: Giving more context about potential victims to LLM-based AI tools can help generate highly personalized emails, increasing the likelihood of successful scams.
How to Spot AI Fraud
AI-powered scams can be spotted if you know what to look for. The key is staying vigilant and knowing some best practices for generative AI fraud detection.
Here are some telltale signs of AI-generated scams:
- Unusual email requests: Be wary of emails from your trusted contacts that seem a bit “off” — for example, urgently asking you to transfer money. This could be a sign of AI phishing, a common AI email security threat.
- Facial irregularities: Deepfakes often struggle to perfectly sync facial expressions and speech. Be on the lookout for any unnatural facial movement or mismatches between the lips and the words being said.
- Audio inconsistencies: While AI-generated audio is getting harder to detect, you can look for unnatural cadences or tones that don’t quite match the person’s normal voice.
- Video background anomalies: Videos generated through deepfake technologies usually mask imperfections by blurring the background or specific areas around the edges or where the face meets the neck and hair.
- Unnatural hand and finger positions: AI-generated images often fail to render the complex structure and movement of human hands. Look for awkward hand or finger positioning, extra or missing fingers, or digits that appear to merge together.
It’s always good to remain skeptical and use your best judgment when exposed to such emails and media. If something seems off, it’s worth double-checking before taking action.
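The “unusual email requests” warning sign above can even be partially automated. The sketch below is a hypothetical illustration only: the allowlisted domains and urgency keywords are made-up examples, and real email security tools use far more sophisticated analysis.

```python
# Illustrative phishing red-flag checks; domain allowlist and keyword
# list are hypothetical examples, not a real security product.
KNOWN_DOMAINS = {"example-corp.com"}  # domains you normally deal with
URGENCY_KEYWORDS = ("urgent", "immediately", "wire transfer", "gift card")

def suspicion_flags(from_address: str, body: str) -> list[str]:
    """Return a list of red flags found in an email's sender and body."""
    flags = []
    # Look-alike domains (e.g. "examp1e-corp.xyz") are a classic
    # impersonation pattern in BEC scams.
    domain = from_address.rsplit("@", 1)[-1].lower()
    if domain not in KNOWN_DOMAINS:
        flags.append(f"unrecognized sender domain: {domain}")
    # Pressure to act fast is a hallmark of fraudulent requests.
    text = body.lower()
    hits = [kw for kw in URGENCY_KEYWORDS if kw in text]
    if hits:
        flags.append("urgency language: " + ", ".join(hits))
    return flags
```

For example, an email from `ceo@examp1e-corp.xyz` demanding an “urgent wire transfer” would raise two flags, while a routine message from a known domain would raise none. Heuristics like these reduce noise, but they are no substitute for verifying requests through a trusted channel.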
Steps You Can Take to Stay Protected
Here are some ways you can mitigate generative AI scams:
1. Educate Yourself
One of the easiest ways to spot potential AI threats is by understanding the mechanisms and patterns behind scams and the technology that powers them. You can learn about common AI fraud types and stay informed on AI advancements from trusted sources and educational forums like:
- Federal Trade Commission (FTC)
- Federal Communications Commission (FCC) Consumer Guides
- Center for AI Safety
2. Use Strong Verification Protocols
To protect against generative AI dangers, it’s also important to have a strong verification process in place. These extra security measures can help confirm your identity and transactions, creating multiple layers of protection. Here are some examples:
- Multi-factor authentication (MFA): This adds an extra step beyond a password, such as sending a code to your phone to confirm that it’s really you. Never share this code with anyone else.
- Biometric and behavioral biometric verification: Things like facial recognition, fingerprints or even the way you type or move your mouse can help confirm your identity.
- Hardware security keys: These are physical devices you can use to log in securely, adding another layer of protection beyond your password.
3. Combat AI With Trusted Methods
Use the S-I-F-T technique, defined by digital literacy expert Mike Caulfield: Stop to analyze, Investigate the source, Find better coverage and Trace back to the original context. This media literacy strategy equips you to evaluate potential misinformation and AI-generated media.
What to Do if You Suspect AI Fraud
Digital scammers can target individuals on any device and any platform. The good news is that they usually can’t access your personal information without your cooperation, so how you react is key. If you suspect an AI-powered scam, it’s essential to recognize it and act quickly.
Here’s what to do:
- Confirm your suspicion: Before reacting to any sensitive message, confirm whether the interaction is genuine. Contact the person directly through a channel you already know and trust, such as a phone number on file, rather than replying to the suspicious message.
- Stop all communications: As soon as you confirm it’s a scam, stop all further interactions. Do not respond to or forward suspicious emails, messages or calls.
- Contact your relationship manager: Notify your relationship manager at your bank or financial institution to get immediate support and implement protective measures.
- Secure your accounts: If you’ve interacted with a fraudulent message or clicked links, change your passwords and review account settings. Make sure to enable security features like two-factor authentication.
Essential Contacts
- FBI Internet Crime Complaint Center: For crimes involving cyber fraud, especially those crossing state or international borders, file a report at ic3.gov or via its CyWatch 24/7 operations center at 855-292-3937.
- Banc of California Client Support: Contact your relationship manager immediately if the fraudulent activity involves your account or any Banc of California services. They will provide immediate support and initiate protective measures to safeguard your assets and personal information. For any concerns or to report suspicious activities, call 877-770-BANC (2262) or email ClientCareCenter@bancofcal.com.
Staying Ahead of Generative AI Fraud
The challenges posed by generative AI fraud are only a part of the natural progression that comes with adopting new technologies. These technologies offer a wide range of benefits, and the risks of generative AI can be manageable if you stay informed.
Explore more cybersecurity and fraud prevention articles on our Business Insights page for detailed tips on recognizing and avoiding common scams.