Fifty years ago, the original Fair Information Practice Principles were established as one of the first major markers along the road that would become the data privacy profession. Among those FIPPs were seeds of ideas that would grow into the foundational privacy principles we still return to today. Ideas like individual autonomy, purpose specification and data minimization are rooted in these originally formulated “safeguards against the potential adverse effects of automated personal-data systems.”
The concerns that were salient in 1973, as bureaucrats pondered the risks of automated record-keeping systems, are echoed in 2023, as policymakers grapple with emerging risks of increasingly advanced artificial intelligence systems.
It is good we have fifty years of lessons to learn from.
With twin enforcement actions against Amazon this week, the U.S. Federal Trade Commission reminds us not to unlearn these lessons. If individual autonomy, purpose specification and retention limits are at the core of our approach to privacy, as they should be in both law and practice, we must keep them at the center of our work, even as we build powerful new technologies. This is especially true when dealing with relatively sensitive or intimate data, such as video recordings in our homes or voice recordings of our children.
To that point, Commissioner Alvaro Bedoya released a concurring statement this week, joined by both of his fellow commissioners, with a timely warning:
“Machine learning is no excuse to break the law. Claims from businesses that data must be indefinitely retained to improve algorithms do not override legal bans on indefinite retention of data. The data you use to improve your algorithms must be lawfully collected and lawfully retained. Companies would do well to heed this lesson.”
The enforcement action in question involves Amazon’s Alexa voice assistant service and smart speaker product line. Although affirmative retention limits are rare in U.S. consumer privacy law, the FTC alleges that Amazon violated the Children’s Online Privacy Protection Act (COPPA) Rule “by retaining children’s personal information longer than reasonably necessary to fulfill the purpose for which the information was collected.” As one example of when affirmative deletion is appropriate, the proposed FTC order would require Amazon to delete personal information from inactive child profiles within 90 days.
The Alexa settlement, which imposes a monetary penalty of USD25 million due to the alleged COPPA violations, also results from general deception and unfairness charges related to Amazon’s supposed failure to fully delete voice recordings, transcripts of voice recordings and geolocation data, even after users requested deletion.
The second settlement of the week involves Ring, the security camera service Amazon acquired in 2018. Although it deals with in-home video footage rather than audio files, the Ring matter includes a number of parallels to the Alexa matter. The FTC alleges Ring violated the privacy and security of its customers by:
- Failing to properly limit employee access to recorded videos. According to the FTC, Ring allowed employees and contractors to view live and stored videos from customers’ cameras without restriction and without users’ consent or knowledge.
- Failing to properly disclose the use of in-home videos for product improvement. The FTC claims Ring used videos from customers’ cameras to develop new features and test its facial recognition technology, without adequately informing customers or obtaining their consent.
- Failing to implement reasonable privacy and security practices. The FTC accuses Ring of not taking sufficient measures to protect customers’ videos from unauthorized access, such as by providing employee training, encrypting videos at rest and guarding against common data security attack vectors.
It is worth noting that most of these alleged misdeeds stopped either before the acquisition, as Ring was making itself attractive to buyers, or immediately after Amazon acquired the company. The notable exception is the data security issues, some of which allegedly continued until recently. As part of the settlement, Amazon’s Ring has agreed to pay USD5.8 million in consumer refunds.
But much more impactful are the algorithm disgorgement terms in the settlement. The company will be required to delete any customer videos and face embeddings obtained prior to March 2018, as well as “any work products it derived from these videos, including any models or algorithms identified or reasonably identifiable by the Defendant as having been developed in whole or in part from review and annotation” of the files. If the deletion of these models is “technically infeasible,” the company must provide a sworn statement by its principal executive officer “certifying that such deletion or destruction is technically infeasible and providing a reasonable explanation for that determination.”
As many have predicted, algorithm disgorgement is becoming an increasingly common tool in privacy enforcement. To avoid the costly implications of such orders, privacy professionals would be well advised to check early and often on how personal information is used to train or improve algorithms. Personal data should not be incorporated into machine learning models unless it was collected in keeping with legal requirements and best practices, including any limits on the purposes for which it may be used.
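None of this reduces to a single technical fix, but the “check early and often” advice can be made concrete as a gate in the training pipeline. The Python sketch below is one minimal, hypothetical illustration: the Record fields, the eligible_for_training helper and the purpose labels are invented for this example and come from neither the FTC orders nor any Amazon system, though the 90-day constant echoes the inactive-child-profile window in the proposed Alexa order.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical metadata attached to each collected record.
# Field names are illustrative only.
@dataclass
class Record:
    record_id: str
    collected_at: datetime
    consent_obtained: bool        # lawful basis documented at collection
    permitted_purposes: set[str]  # purposes disclosed to the user
    deletion_requested: bool      # user asked for erasure

# Illustrative retention window, echoing the proposed order's
# 90-day deletion requirement for inactive child profiles.
MAX_RETENTION = timedelta(days=90)

def eligible_for_training(record: Record, purpose: str,
                          now: datetime | None = None) -> bool:
    """Return True only if the record may feed a training set for `purpose`."""
    now = now or datetime.now(timezone.utc)
    if record.deletion_requested:
        return False  # deletion requests must reach training data too
    if not record.consent_obtained:
        return False  # no documented lawful basis
    if purpose not in record.permitted_purposes:
        return False  # purpose limitation: training must have been disclosed
    if now - record.collected_at > MAX_RETENTION:
        return False  # retention limit exceeded
    return True

records = [
    Record("a1", datetime.now(timezone.utc) - timedelta(days=10),
           True, {"service", "model_improvement"}, False),
    Record("b2", datetime.now(timezone.utc) - timedelta(days=200),
           True, {"model_improvement"}, False),  # too old
    Record("c3", datetime.now(timezone.utc) - timedelta(days=5),
           True, {"service"}, False),            # training not disclosed
]
training_set = [r for r in records if eligible_for_training(r, "model_improvement")]
print([r.record_id for r in training_set])  # only "a1" passes all checks
```

In practice such checks belong in data governance tooling rather than ad hoc scripts, but the point stands: if a record fails a lawfulness, purpose or retention test, it never reaches the training set, and there is nothing for a disgorgement order to claw back.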