

Will AI Increase Your Risk of Identity Theft?

A few weeks ago, I read an interesting article titled The Potential for AI-Powered Identity Theft: How Your Digital Footprint Could Be Used Against You by Raj Pathak, a Cloud Security Engineer at Google.


In this article, Pathak states that “cybercriminals can use AI algorithms to analyze your online behavior and create fake identities, impersonate people, and commit fraud.”

Ever since OpenAI released its artificial intelligence (AI) chatbot ChatGPT in November 2022, additional AI models, including Google Bard and Bing AI, have become publicly available.


While OpenAI’s stated intention is to promote and develop “friendly AI” in which artificial general intelligence (AGI) has a positive effect on humanity, cybercriminals are already exploiting generative AI by designing different types of malware to hijack login data, social media accounts, and bank accounts.


A May 2, 2023 Security Intelligence article titled ChatGPT Confirms Data Breach, Raising Security Concerns reports that OpenAI confirmed ChatGPT experienced a data breach through a vulnerability that allowed users to see other active users’ chat histories.


However, it gets worse, “as the researchers from OpenAI discovered this same vulnerability was likely responsible for visibility into payment information for a few hours before ChatGPT was taken offline.”


According to OpenAI, “it was possible for some users to see another active user’s first and last name, email address, payment address, the last four digits (only) of a credit card number and credit card expiration date.”


Then, in a May 4, 2023 Help Net Security article titled ChatGPT and other AI-themed lures used to deliver malicious software, Check Point (one of the leading IT security companies in the world) reported that “since the beginning of 2023 until the end of April, out of 13,296 new domains created related to ChatGPT or OpenAI, 1 out of every 25 new domains were either malicious or potentially malicious.” That works out to roughly 530 suspect domains in about four months.
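To make the fake-domain threat concrete, here is a minimal sketch of the kind of lookalike-domain check a defender or cautious user might run. This is not Check Point’s methodology; the allow-list of legitimate domains and the brand keywords below are illustrative assumptions only.

# A minimal sketch of flagging lookalike domains that trade on the
# "chatgpt" / "openai" brand names. The allow-list and keywords are
# illustrative assumptions, not an official or complete list.
LEGITIMATE = {"openai.com"}
BRAND_KEYWORDS = ("chatgpt", "openai")

def looks_suspicious(domain: str) -> bool:
    domain = domain.lower().strip(".")
    # Official domains and their subdomains are fine
    if domain in LEGITIMATE or domain.endswith(tuple("." + d for d in LEGITIMATE)):
        return False
    # Any other domain that embeds a brand name deserves a closer look
    return any(keyword in domain for keyword in BRAND_KEYWORDS)

for d in ["chat.openai.com", "openai-login-support.com", "chatgpt-app-download.net"]:
    print(d, "->", "suspicious" if looks_suspicious(d) else "ok")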


So, let’s return to my original question: Will AI increase your risk of identity theft?

In my view, the answer is yes. Think about it. Identity theft has evolved from a manual, labor-driven crime into today’s more sophisticated digital fraud, and much of that digital fraud has been driven by the never-ending list of data breach incidents at some of the most well-known, technology-driven companies.


In addition, individual consumers need to look in the mirror and recognize how their addiction to apps and social media opens them up to identity theft and privacy risks.


For years, cyber thieves and identity theft criminals have leveraged data breaches, social networks, and apps to do their dirty work. For all the benefits of AI in helping organizations and consumers make faster, more informed decisions, those same organizations and consumers need to be aware of AI-related malware, fake domains, deepfakes, and synthetic identity theft.


To reduce your risk of identity theft, especially AI-related identity theft, it's important to learn about digital security and the many resources available to protect your online identity, such as strong passwords and password managers, two-factor authentication, virtual private networks, credit bureau monitoring, and dark web monitoring.
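As one small example of the first item on that list, here is a minimal sketch of generating a strong, random password with Python's standard-library secrets module. The length and character set are illustrative choices, not a formal policy, and the result belongs in a password manager rather than in your memory.

# A minimal sketch: generate a strong, random password using Python's
# secrets module. Length and character set are illustrative assumptions.
import secrets
import string

def generate_password(length: int = 20) -> str:
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*-_"
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # store the result in a password manager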
