Pharmacy chain Rite Aid is getting a timeout from AI facial recognition surveillance tech thanks to federal regulators.
The U.S. Federal Trade Commission today announced a settlement with Rite Aid alleging the chain recklessly deployed AI biometric surveillance on customers without safeguards - and slapped a five-year ban on the controversial practice as a result.
The FTC alleges that from 2012 until 2020, Rite Aid scanned the faces of shoppers across hundreds of locations in an attempt to identify suspected thieves and individuals suspected of other unlawful activity.
The agency says the technology was rolled out without proper accuracy testing or safeguards against potential privacy harms.
Facial recognition technology has advanced rapidly in recent years, enabling a range of commercial and law enforcement applications but also raising significant privacy concerns.
Retailers have looked to implementations like Rite Aid's to help curb theft, but critics argue such uses often proceed without sufficient consent, transparency or accuracy testing.
The FTC has increasingly scrutinized the biometrics sector, particularly as retailers and others deploy tools to identify individuals in public spaces without clarifying what data may be collected or how it could be used and shared.
This marks one of the strongest actions to date against a company's biometric practices, reflecting growing recognition that emerging surveillance capabilities demand careful oversight.
According to the agency, Rite Aid contracted with vendors to build a database containing photos and personal information of individuals suspected of past criminal activity at stores.
For its part, Rite Aid has said it stopped using the technology in a small group of stores more than three years ago, before the FTC's investigation into the company's use of the technology began.
The FTC says tens of thousands of low-quality images, obtained from sources like security footage and social media, were incorporated into the system.
The technology produced thousands of false positives, sometimes matching customers to suspects located thousands of miles away or flagging the same person at multiple distant locations.
The complaint says Rite Aid did not consistently ensure staff followed a policy allowing them to flag problematic identification results.
The FTC states Rite Aid failed to properly test the technology for accuracy before rolling it out or establish procedures to assess match error rates over time.
Employees running the program in stores also did not receive sufficient training on the technology's limitations.
According to the complaint, people of color faced disproportionate rates of mistaken identification by Rite Aid's systems, particularly in stores located in majority-minority communities.
In addition to barring biometric surveillance for five years across Rite Aid's physical stores and online properties, the proposed settlement requires new oversight measures if the company reconsiders such capabilities down the line.
The FTC further contends Rite Aid violated a 2010 data security order by failing to ensure that vendors entrusted with customer information employed adequate security practices.
To remediate past compliance failures, the new stipulations mandate security precautions such as multi-layered authentication and continuous employee training, and require Rite Aid to certify its adherence to the order to the FTC annually.
Rite Aid insists safety remains a top priority as it navigates ongoing bankruptcy proceedings, brought on by financial pressures on the ailing chain amid broader industry turmoil.