DYSTOPIAN FACIAL RECOGNITION SOFTWARE PLAN
The US Internal Revenue Service (IRS) announced last week that it will “transition away” from using third-party facial recognition provider ID.me to verify taxpayers’ identities online.
The move came in the wake of bipartisan pushback from US senators, who objected to the tax agency’s outsourcing of taxpayers’ private data.
US Senate Finance Committee chair Sen. Ron Wyden (D-OR) said: “Americans should not have to sacrifice their privacy for security”. Though ID.me says it wouldn’t use personal data for “unapproved or unauthorized purposes”, the information would still be a likely target for cyberattacks.
It’s uncertain what will happen to the already-collected data of 7 million Americans from 30 state and 10 federal agencies. Ditto the $86 million contract ID.me was awarded by the US government last June.
ID.me announced Wednesday that starting March 1st, users can delete the selfies they uploaded as verification over the past few weeks.
HOW COMMON ARE DATA BREACHES?
Though it’s difficult to gauge exactly how much data is hacked, leaked or otherwise stolen (reporting is inconsistent), the US non-profit Identity Theft Resource Center (ITRC) reported an increase in data breaches last year relative to 2020.
Estimates suggest that private data belonging to hundreds of millions of Americans has been stolen from US tax records in recent years. According to a 2020 US Treasury Department report, “much of the information the IRS uses to provide assurance of the taxpayers’ identities may have been stolen”.
And that’s from a government entity – in ID.me’s case, the data is held by a third party outside government control. House Oversight Committee chair Rep. Carolyn B. Maloney (D-NY) notes that this “increases the potential for exposure due to bad actors and other cybersecurity incidents”.
IS YOUR DATA SAFE FROM AI?
‘Conscious’ artificial intelligence isn’t yet a reality (despite recent claims by OpenAI’s chief scientist, Ilya Sutskever), but it’s still important to acknowledge what today’s AI systems are already capable of.
Facial recognition security has been a major topic of debate among data-privacy advocates in recent months. The European Union is deliberating a proposed ‘AI Act’, and Meta recently deleted the facial recognition data of more than a billion Facebook users – heck, even Mark Zuckerberg covers his webcam with tape.
We’ve already seen criminals successfully use AI to impersonate voices, so it’s only a matter of time before visual identity theft is attempted. Deepfake technology is only becoming more sophisticated, and with scalable quantum computers expected to dramatically boost computing power, strong encryption is more important than ever.
We’re in the early stages of a digital identity revolution, and missteps like this ID.me debacle will only be a piece of the larger puzzle as new technologies are developed by verification services and hackers alike.