My father’s life hung in the balance when he fell victim to a dangerously common internet scam. A lifelong Type 2 diabetic, he relies on a regimen of medications. Yet online ads touting elusive cures enticed him with bold claims like “What your doctor isn’t telling you – independent research exposes Big Pharma!”
Driven by hope for a cure, he made a grave mistake: he stopped taking his vital medications. Fortunately, his doctor and I were able to intervene before the worst could happen, but his story is an all-too-common one of vulnerable loved ones falling prey to manipulation.
We often mistakenly assume that seniors are the only victims of scams, but all of us are potential targets. That’s especially true in a new world where scams can be powered by the exponential growth of artificial intelligence (AI). Even the savviest among us are increasingly vulnerable to deceptive maneuvers, scams, and manipulation.
Unsurprisingly, as technology evolves, so do the exploitative tactics employed by scammers. One of the latest trends in a scammer’s repertoire is voice cloning. Picture yourself receiving a desperate call from a loved one’s phone number, their voice trembling as they plead for your help. Every inflection and tone perfectly mimics them, making it difficult to hang up. The panic and desperation that you hear on the other end of the line cause you to respond as any caring person would – to help. But it could be to your peril. Earlier this year, a man used AI-generated voices to deceive at least eight victims out of hundreds of thousands of dollars in just three days. It is a chilling wake-up call for all of us.
AI deception is also infiltrating our most intimate spheres. Enter CupidBot, an AI-powered tool aimed at men who use dating apps. The company promises to help you “skip to the good part” by using their AI on mainstream dating apps to select potential matches, engage in conversations for you, and even orchestrate dates – without the other person knowing they’re actually interacting with AI.
CupidBot blurs the line between human interaction and automated deception. Despite the clear dangers of such manipulation, the creation of apps like this one – and, more broadly, of AI posing as humans without clear disclosure – is currently legal.
AI’s influence has also been amplified on the national stage, as evidenced by the White House’s recent condemnation of the alarming surge in AI deepfakes. A fake image showing the Pentagon being bombed went viral and circulated across social media so quickly that it momentarily triggered a nearly $500 billion loss in the stock market. The potential for AI-generated misinformation to impact society and government is uncharted territory for all of us. These instances are mere glimpses into the wide array of dangers facilitated by AI, ranging from manipulative tactics to significant national security risks.
While regulation is necessary to safeguard consumers, people are vulnerable to these practices right now. So how do we protect ourselves?
The first step is awareness. We must remain vigilant to the telltale signs of manipulation and deceit. Are you being pressured and told to act quickly? Are you being asked to use unusual payment methods? Are you being told to divulge private, sensitive information? By being mindful of these red flags, we fortify ourselves against the most common scams.
Second, before giving out any personal information, sending money, or sharing an inflammatory post, exercise a little caution and due diligence. Establish a code word or phrase with family members before sharing private information over the phone. Contact the person or organization through a separate, known channel to verify the legitimacy of the request. Double-check images circulating online against a trusted media source. These extra steps can mean the difference between becoming a victim and maintaining your financial security.
Finally, the burden can’t be on consumers alone – we need standards and guidelines for ethical AI and, yes, regulation that is enforceable. We must elevate these concerns to tech experts across industries and lawmakers in the halls of government. Regulating the way AI can be used to deceive consumers should be a long-term goal for all of us, especially when we see the rise of nefarious schemes across the country.
There are a number of steps technology and government leaders can take to help, including strengthening privacy protections, ensuring more transparency from AI-powered tools, and holding companies more responsible for the misconduct that occurs on their platforms. But change won’t happen unless we demand it.
AI holds tremendous potential for advancements in various fields, from education to scientific research. We cannot retreat from the positive impacts, but must proactively address the harms and ensure robust safeguards to protect individuals and society. Together, we should strive for a future of progress, taking steps to harness the benefits of AI, while protecting ourselves and creating a safer world.
Marta Tellado is the CEO of Consumer Reports. Marta is among the 1.6% of U.S. CEOs who are Latina and is a tireless advocate for consumer rights, especially for marginalized communities.