Data Privacy Archives | Compliance Chief 360
The independent knowledge source for Compliance Officers

FTC Investigation Triggers Lawsuit Against TikTok for Children’s Privacy Violations
Compliance Chief 360 | Fri, 09 Aug 2024
As a result of the Federal Trade Commission’s investigation, the Department of Justice sued TikTok and its parent company ByteDance for flagrantly violating a children’s privacy law, the Children’s Online Privacy Protection Act (COPPA), and also alleged that they violated an existing 2019 FTC consent order against TikTok for earlier COPPA violations.

The complaint alleges that TikTok and ByteDance failed to comply with the COPPA requirement to notify and obtain parental consent before collecting and using personal information from children under the age of 13.

“TikTok knowingly and repeatedly violated kids’ privacy, threatening the safety of millions of children across the country,” said FTC Chair Lina Khan. “The FTC will continue to use the full scope of its authorities to protect children online—especially as firms deploy increasingly sophisticated digital tools to surveil kids and profit from their data.”

“The Justice Department is committed to upholding parents’ ability to protect their children’s privacy,” said Principal Deputy Assistant Attorney General Brian Boynton. “This action is necessary to prevent the defendants, who are repeat offenders and operate on a massive scale, from collecting and using young children’s private information without any parental consent or control.”

ByteDance and its related companies allegedly were aware of the need to comply with the COPPA Rule and the 2019 consent order and knew about TikTok’s compliance failures that put children’s data and privacy at risk. Instead of complying, ByteDance and TikTok spent years knowingly allowing millions of children under 13 on their platform designated for users 13 years and older in violation of COPPA, according to the complaint.

As of 2020, TikTok had a policy of maintaining accounts of children that it knew were under 13 unless the child made an explicit admission of age and other rigid conditions were met, according to the complaint. TikTok employees allegedly spent an average of only five to seven seconds reviewing each account to make their determination of whether the account belonged to a child.

The company allegedly continued to collect personal data from these underage users, including data that enabled TikTok to target advertising to them—without notifying their parents and obtaining their consent as required by the COPPA Rule. Even after it reportedly changed its policy not to require an explicit admission of age, TikTok still continued to unlawfully maintain and use personal information of children, according to the complaint.

TikTok’s practices prompted its own employees to raise concerns. As alleged, after failing to delete numerous underage child accounts, one compliance employee noted, “We can get in trouble … because of COPPA.”

TikTok Allowed Children to Bypass the Age Requirement

In addition, the complaint alleges that TikTok built back doors into its platform that allowed children to bypass the age gate aimed at screening children under 13. TikTok allegedly allowed children to create accounts without having to provide their age or obtain parental consent to use TikTok by using credentials from third-party services like Google and Instagram. TikTok classified such accounts as “age unknown” accounts, which grew to millions of accounts, according to the complaint.
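The mechanics of this alleged back door are simple: a signup path that runs through a third-party login never asks for a birth date, so the age gate is skipped entirely. The following is a hedged sketch of how such a gap arises, using hypothetical field and function names rather than anything from TikTok’s actual systems:

```python
# Hypothetical sketch of an age-gated signup flow with a third-party
# login "back door" of the kind the complaint describes.

def signup_native(birth_year: int, current_year: int = 2024) -> str:
    """Native signup: classify the account from the self-reported age."""
    age = current_year - birth_year
    return "under_13" if age < 13 else "standard"

def signup_third_party(oauth_profile: dict) -> str:
    """Third-party login (e.g., via an external identity provider).

    If the provider's profile carries no birth date, the account is
    created anyway and classified "age unknown" -- so age screening
    never happens on this path.
    """
    if "birth_year" not in oauth_profile:
        return "age_unknown"
    return signup_native(oauth_profile["birth_year"])
```

A profile without a birth date sails through as "age_unknown," which is how, per the complaint, millions of such accounts accumulated.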

TikTok also allegedly made it difficult for parents to request that their child’s accounts be deleted. When parents managed to navigate the multiple steps required to submit a deletion request, TikTok often failed to comply. TikTok also imposed unnecessary and duplicative hurdles on parents seeking to have their children’s data deleted. That practice allegedly continued even after the executive responsible for child safety issues told TikTok’s then-CEO, “we already have all the info that’s needed” to delete a child’s data when a parent requests it; yet TikTok would not delete the data unless the parent filled out a second, duplicative form. If the parent did not do that, the executive allegedly added, “then we have actual knowledge of underage user[s] and took no action!”

Additionally, the complaint alleges that TikTok failed to:

  • Notify parents about all of the personal data they were collecting from children;
  • Obtain parental consent for the collection and use of that data;
  • Limit the collection, use, and disclosure of children’s personal information; and
  • Delete children’s personal information when requested by parents or when it was no longer needed.

The complaint asks the court to impose civil penalties against ByteDance and TikTok and to enter a permanent injunction against them to prevent future violations of COPPA.

Meta Reaches Historic Settlement Over Biometric Data Violations
Compliance Chief 360 | Wed, 31 Jul 2024
Social media giant Meta agreed to settle a lawsuit accusing the company of illegally capturing biometric data from its users without their consent. Meta will pay a record $1.4 billion over the next five years. Texas Attorney General Ken Paxton and the law firm McKool Smith, which also represents Texas, said the deal is “the largest settlement ever obtained from an action brought by a single state.”

The lawsuit accused Meta of using biometric data contained in its users’ photos and videos on Facebook without permission to do so. Through this activity, the state alleged, Facebook exploited the personal information of users and non-users alike to grow its empire and reap historic windfall profits.

“Companies that operate in Texas must be held accountable for their actions, particularly when it puts the privacy of Texans at risk. We’re grateful to have had the opportunity to work with the Office of the Attorney General, and we appreciate how the court handled this lawsuit,” attorneys Sam Baxter and Jennifer Truelove said in a written statement.

Texas Alleged that Meta Violated its Data Privacy Laws

AG Paxton accused Meta of violating Texas’s Capture or Use of Biometric Identifier Act (CUBI) and the Deceptive Trade Practices Act. The claimed violations arose out of Meta’s “Tag Suggestions” feature on Facebook, an automated photo-tagging feature triggered when users upload photos or videos. Facebook introduced the facial recognition technology in 2010 to give users an easier way of tagging their friends. In 2021, the company announced that it would stop using the technology after settling a case in which it was sued for violating Illinois’s biometric privacy law.

“It was the first time the State of Texas sought to enforce its biometric-privacy law since enactment, requiring our team to develop novel litigation approaches and analyze important questions of first impression,” Zina Bash, representative attorney for Texas, said in a written statement. “And it was the first time a single state has ever achieved a settlement of this magnitude — which is even more rewarding because of the record time in which we obtained it. When we filed the case in 2022, we knew the state wanted to move quickly, and our team was relentless in litigating the case.”

In February 2022, Paxton filed a lawsuit in Texas state court against Facebook’s parent company, accusing it of violating the CUBI act by failing to obtain consent from Facebook users before collecting their data. The state also claimed that Meta unlawfully disclosed this data to third parties and failed to delete the data within the time frame specified by CUBI.

A Meta spokesperson said the company was “pleased to resolve this matter and look forward to exploring future opportunities to deepen our business investments in Texas, including potentially developing data centers.”


Jacob Horowitz is a contributing editor at Compliance Chief 360° 

AT&T Sued for Failing to Protect Customer Data in Cybersecurity Breach
Compliance Chief 360 | Thu, 18 Jul 2024
After having nearly all of its customers’ records breached, AT&T is facing a class action lawsuit alleging that the cellular company failed to implement adequate cybersecurity procedures and protocols. The class action is taking place in Texas, Montana and New Jersey federal courts.

The lawsuit arises out of an incident in which hackers downloaded phone call and text message records belonging to “nearly all” of AT&T’s wireless customers. AT&T admitted to the hack and said that the breached data included a record of every AT&T customer’s phone and text logs; however, it did not include the content of calls or text messages, nor sensitive personal information such as Social Security numbers, dates of birth, or customer names.

The lawsuit claims that AT&T was negligent and alleges that the company was not sufficiently transparent about the “nature and extent of data security lapses impacting its customers,” including how the attacks put them in danger of identity fraud. “Plaintiff and other data breach victims provided their [personally identifiable information] to AT&T with the reasonable expectations and mutual understanding that AT&T would comply with its obligations to keep such information confidential and secure from unauthorized access,” the complaint said.

Dina Winger, the plaintiff in the Texas lawsuit, emphasized that AT&T should have known the risks within the cellular industry and should have implemented protocols to mitigate them. “Because the data breach was an intentional hack by cybercriminals seeking information of value that they could exploit, victims are at imminent risk of severe identity theft and exploitation,” Winger said, adding that AT&T knew or should have known that its systems were targets for cybersecurity attacks.

In the Montana federal court, AT&T was accused of “failing to properly secure and safeguard” personal information, including phone call and text message records, for “nearly all” of the company’s 110 million cellular customers. That lawsuit seeks monetary compensation from AT&T, an injunction requiring the company to modify its data security processes, credit monitoring and identity theft insurance for the victims, and attorney fees and litigation costs.

The New Jersey case largely repeats the Montana and Texas accusations, emphasizing that AT&T disregarded its customers’ rights by failing to implement adequate measures to protect their sensitive information. All the plaintiffs aim to represent nationwide classes of data breach victims, potentially growing the classes to millions of individuals.

AT&T Explains How the Breach Occurred

According to AT&T, its investigation revealed that a hacker accessed an AT&T workspace on a third-party cloud platform. The hacker then extracted files containing records of customer call and text interactions from approximately May 1 to October 31, 2022. The cellular service company said that it immediately activated its incident response process and hired external cybersecurity experts to help with the issue.

Since then, AT&T has assured its customers that none of their sensitive information has been leaked and that it has now secured its systems to contain the breach.



Jacob Horowitz is a contributing editor at Compliance Chief 360° 

France Fines Amazon $35 Million for Excessive Monitoring of Employees
Compliance Chief 360 | Thu, 25 Jan 2024
The French Data Protection Authority (FDPA) issued a $35 million fine to Amazon for excessive surveillance of its employees, including the company’s relentless tracking of employee performance and breaks and its implementation of a video monitoring system without informed employee consent.

The Commission Nationale de l’Informatique et des Libertés (CNIL) ruled that Amazon’s system of measuring how quickly its employees scanned items and how long they took breaks was unnecessary and intrusive. The trillion-dollar company had implemented a “Stow Machine Gun” indicator that flagged any item scanned less than 1.25 seconds after the previous one, and the company was immediately alerted whenever an employee’s scanning pace deviated from the required rhythm.

Amazon also employed an “idle time” indicator and a “latency under ten minutes” indicator, which alerted the company when an employee’s scanner sat idle for ten minutes or more and when it was interrupted for under ten minutes, respectively. Because of the large amount of pressure this system placed on Amazon’s employees, CNIL declared the system excessive, stating that it is “illegal to set up a system measuring work interruptions with such accuracy, potentially requiring employees to justify every break or interruption.”

In its ruling, CNIL examined the three indicators and determined that they led to excessive monitoring of Amazon’s employees. Specifically, the Commission found that the Stow Machine Gun indicator meant that nearly any activity of an employee could be monitored to the nearest second, and errors were common. The other indicators made it possible to constantly monitor every interruption of an employee’s scanner, even a very brief one.

Amazon Charged with Unauthorized Employee Surveillance

CNIL additionally stated that Amazon retained employee surveillance data for an excessive 31 days. Amazon should not be permitted to collect “every detail of the employee’s quality and productivity indicators collected using the scanners over the last month,” the ruling said. The Commission stated that reviewing the surveillance data on a weekly basis would have been sufficient.

The FDPA also found that Amazon engaged in video surveillance of employees without their informed consent. This type of surveillance, without adequate notice, violates the privacy protections of the General Data Protection Regulation, the French authority said.

“We strongly disagree with the CNIL’s conclusions, which are factually incorrect, and we reserve the right to file an appeal,” Amazon said in a statement. “Warehouse management systems are industry standard and are necessary for ensuring the safety, quality and efficiency of operations and to track the storage of inventory and processing of packages on time and in line with customer expectations.”

This is not the first time Amazon has been charged with violating the General Data Protection Regulation. In July 2021, Luxembourg issued the tech and retail giant a record fine of $886 million for violations stemming from its data processing practices.


Jacob Horowitz is a contributing editor at Compliance Chief 360°

FTC Enacts First-Ever Ban on Selling Sensitive Location Data
Compliance Chief 360 | Wed, 10 Jan 2024
The Federal Trade Commission (FTC) has prohibited data broker X-Mode Social and its successor Outlogic from sharing or selling sensitive location data as part of a settlement resulting from allegations that the company sold precise location data that could be used to track people’s visits to private locations.

In its first settlement with a data broker regarding the collection and sale of sensitive location information, the FTC also accused X-Mode Social and Outlogic of failing to put in place reasonable safeguards on the use of such information by third parties. This settlement ultimately represents the FTC’s strong commitment to preventing companies from selling their users’ sensitive location information.

“Geolocation data can reveal not just where a person lives and whom they spend time with but also, for example, which medical treatments they seek and where they worship. The FTC’s action against X-Mode makes clear that businesses do not have free license to market and sell Americans’ sensitive location data,” said FTC Chair Lina Khan. “By securing a first-ever ban on the use and sale of sensitive location data, the FTC is continuing its critical work to protect Americans from intrusive data brokers and unchecked corporate surveillance.”

X-Mode/Outlogic has been selling location information that is tied to each user’s phone. This data isn’t anonymous and can be used to track where a person with a specific phone has been. The company sells consumer location data to hundreds of clients in industries ranging from real estate to finance, as well as to private government contractors, for purposes such as advertising.

According to the FTC’s complaint, until May 2023, the company did not have any policies in place to remove sensitive locations from the location data it sold. The FTC says X-Mode/Outlogic did not implement any reasonable safeguards against use of the location data it sells, putting consumers’ sensitive and private information at risk.

The information revealed through the location data that X-Mode/Outlogic sold not only violated consumers’ privacy but also exposed them to “potential discrimination, physical violence, emotional distress, and other harms,” according to the complaint. The FTC also said the company failed to ensure that users of apps incorporating its software, such as Drunk Mode and Walk Against Humanity, were properly informed about how their location data would be used.

The company also failed to employ the necessary technical safeguards and oversight to ensure that it honored requests by some Android users to opt out of tracking and personalized ads, according to the complaint. The FTC says these practices violate the FTC Act’s prohibition against unfair and deceptive practices.

FTC Mandates Preservation of User Location Privacy

In addition to the limits on sharing certain sensitive locations, the proposed order requires X-Mode/Outlogic to create a program ensuring that it develops and maintains a comprehensive list of sensitive locations and that it does not share or sell data about such locations. Other provisions of the proposed order require the company to:

  • Delete or destroy all the location data it previously collected, and any products produced from this data unless it obtains consumer consent or ensures the data has been deidentified or rendered non-sensitive;
  • Ensure that companies that provide location data to X-Mode/Outlogic are obtaining informed consent from consumers for the collection, use and sale of the data or stop using such information;
  • Implement procedures to ensure that recipients of its location data do not associate the data with locations that provide services to LGBTQ+ people such as bars or service organizations, with locations of public gatherings of individuals at political or social demonstrations or protests, or use location data to determine the identity or location of a specific individual;
  • Provide a simple and easy-to-find way for consumers to withdraw their consent for the collection and use of their location data and for the deletion of any location data that was previously collected;
  • Provide a clear and conspicuous means for consumers to request the identity of any individuals and businesses to whom their personal data has been sold or shared or give consumers a way to delete their personal location data from the commercial databases of all recipients of the data; and
  • Establish and implement a comprehensive privacy program that protects the privacy of consumers’ personal information and also create a data retention schedule.

The proposed order also limits the company from collecting or using location data when consumers have opted out of targeted advertising or tracking, or if the company cannot verify records showing that consumers have consented to the collection of location data.

Social Media Companies Challenge Law Requiring Parental Consent
Compliance Chief 360 | Tue, 09 Jan 2024
Starting next week, the state of Ohio will require social media platforms to obtain parental consent before a child under the age of sixteen creates an account on their websites. However, an association called NetChoice is now challenging the law, arguing that it violates the First Amendment rights of free speech and a free press.

“Ohio has decided that the government—in the first instance—should decide what speech is appropriate for minors on the internet,” NetChoice said. “The act restricts minors’ access to covered websites unless a parent provides ‘verifiable’ consent through a state-mandated means.” The association filed its lawsuit against the Ohio Attorney General Dave Yost.

This law, known as the Parental Notification by Social Media Operators Act (PNSNOA), requires platforms such as TikTok and Instagram to find a way to verify a child’s age and to obtain parental consent before an account can be created. NetChoice argues that, rather than imposing restrictions on users and platforms through required parental consent, a more effective approach would be to offer parents educational resources that improve their understanding of the risks associated with their child’s use of social media.

“We at NetChoice believe families equipped with educational resources are capable of determining the best approach to online services and privacy protections for themselves,” Chris Marchese, director of the NetChoice Litigation Center said. “With our lawsuit, we will fight to ensure all Ohioans can embrace digital tools without their privacy, security, and rights being thwarted.”

Under the law, these platforms are required to obtain parental consent by doing at least one of the following:

  • Require a parent or legal guardian to sign and return a form consenting to the child’s use or access.
  • If a payment is necessary, require the parent to use a credit card, debit card or other payment system that provides notification for each separate transaction.
  • Require a parent or legal guardian to call a telephone number to confirm the child’s use or access.
  • Require a parent or legal guardian to connect through videoconference to confirm the child’s use or access.
  • Verify a parent’s or legal guardian’s identity by checking their ID.
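For platforms subject to the law, the verification options above amount to an allow-list check performed before account creation. The following is a hedged sketch for illustration only; the method names are hypothetical labels for the statute’s listed options, not terms from the law or from any platform’s actual implementation:

```python
# Minimal sketch of a PNSNOA-style parental-consent gate.
# Method names below are hypothetical labels for the statute's options.
from typing import Optional

ACCEPTED_CONSENT_METHODS = {
    "signed_form",           # parent/guardian signs and returns a consent form
    "payment_verification",  # card or payment system with per-transaction notice
    "phone_call",            # parent confirms the account by telephone
    "video_conference",      # parent confirms via videoconference
    "government_id",         # parent's identity verified against an ID
}

def may_create_account(age: int, consent_method: Optional[str]) -> bool:
    """Return True if an account may be created under the law's terms."""
    if age >= 16:
        # The consent requirement applies only to children under sixteen.
        return True
    # An under-16 user needs at least one accepted verification method on file.
    return consent_method in ACCEPTED_CONSENT_METHODS
```

Under this sketch, a fourteen-year-old with a signed parental form passes the gate, while the same child with no recorded consent does not.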

Parental Support for PNSNOA

This new law has stirred controversy among parents and tech companies. The companies themselves are, of course, opposed to PNSNOA; however, some Ohio parents have expressed enthusiasm for the law, believing it will safeguard their children from the dangers of social media. “I actually like it,” said Kenyetta Whipple, a mother from Youngstown, Ohio. “It’s scary nowadays because you can talk to a complete stranger with the click of a button, and I have a young daughter. And I just feel like we’re living in a world where there is just too much free access to our kids. You can access someone’s locations, videos, and pictures, and a lot of children are naïve.”

The inherent purpose of this law is to protect children from “harmful content” and potential obsessions. As Attorney General Yost noted in a statement, “in filing this lawsuit, these companies are determined to go around parents to expose children to harmful content and addict them to their platforms. These companies know that they are harming our children with addictive algorithms with catastrophic health and mental health outcomes.”

NetChoice has also sued California and Arkansas over laws similar to PNSNOA. The association prevailed in both cases, and neither state was permitted to place such restrictions on social media platforms and their young users.


Jacob Horowitz is a contributing editor at Compliance Chief 360°

FTC Proposes Significant Changes to Online Protection Rules for Children
Compliance Chief 360 | Thu, 21 Dec 2023
The Federal Trade Commission has proposed changes to the Children’s Online Privacy Protection Act (COPPA) that would place new restrictions on the use and disclosure of children’s personal information and limit companies from profiting from children’s data.

With these proposed changes, the FTC intends for the Act to reflect technological changes and aims to provide young children with greater protections for their personal data. The FTC also wants to ensure that parents will retain control regarding their children’s data and that website operators will be held accountable for their failure to maintain the safety and security of digital services for children.

FTC Seeking Comments on Proposed Changes

In a notice of proposed rulemaking, the FTC is seeking comment on proposed changes to the COPPA Rule aimed at addressing the evolving ways personal information is being collected, used, and disclosed, including to monetize children’s data, and clarifying and streamlining the rule.

The COPPA Rule, which first went into effect in 2000, requires certain websites and other online services that collect personal information from children under the age of 13 to provide notice to parents and obtain verifiable parental consent before collecting, using, or disclosing personal information from these children. The current rule places several obligations on website operators, including:

  • Posting a detailed privacy policy that describes the information collected from users.
  • Obtaining verifiable parental consent before collecting personal information from a child under the age of 13.
  • Disclosing to parents any information collected on their children by the website.
  • Honoring a parent’s right to revoke consent and have information deleted.
  • Limiting collection of personal information when a child participates in online games and contests.
  • Protecting the confidentiality, security, and integrity of any personal information collected online from children.
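
The notice-and-consent obligation above can be illustrated as a simple gate in an operator's signup flow. This is only a sketch: the function name and the consent flag are hypothetical, and COPPA's actual "verifiable parental consent" methods (signed forms, credit card verification, and so on) involve far more than a boolean.

```python
COPPA_AGE_THRESHOLD = 13  # COPPA covers children under 13


def may_collect_personal_info(user_age: int, parental_consent_on_file: bool) -> bool:
    """Hypothetical gate: for users under 13, COPPA requires notice to parents
    and verifiable parental consent *before* any personal information is
    collected; users 13 and over are outside the rule's scope."""
    if user_age >= COPPA_AGE_THRESHOLD:
        return True
    return parental_consent_on_file


# A 12-year-old without consent on file must be blocked from data collection:
assert may_collect_personal_info(12, False) is False
assert may_collect_personal_info(12, True) is True
assert may_collect_personal_info(13, False) is True
```

In practice the consent flag would be backed by an auditable consent record, since operators must also let parents review and revoke consent later.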

“Kids must be able to play and learn online without being endlessly tracked by companies looking to hoard and monetize their personal data,” said FTC Chair Lina Khan. “The proposed changes to COPPA are much-needed, especially in an era where online tools are essential for navigating daily life—and where firms are deploying increasingly sophisticated digital tools to surveil children. By requiring firms to better safeguard kids’ data, our proposal places affirmative obligations on service providers and prohibits them from outsourcing their responsibilities to parents.”

The FTC initiated the latest review of the COPPA Rule in 2019 and received more than 175,000 comments on its request for public comment on whether changes were needed to the rule. The agency also held a workshop in October 2019 on whether to update the COPPA Rule in light of evolving business practices in the online children’s marketplace, including the increased use of voice-enabled connected devices, educational technology, and general audience platforms hosting third-party child-directed content.

The FTC last made changes to the COPPA Rule in 2013 to reflect the increasing use of mobile devices and social networking by, among other things, expanding the definition of personal information to include persistent identifiers such as cookies that track a child’s activity online, as well as geolocation information, photos, videos, and audio recordings.

FTC’s Proposed COPPA Amendments

The FTC has proposed several changes to the rule, including:

  • Requiring Separate Opt-In for Targeted Advertising: Building on the existing consent requirement in section 312.5, website and online service operators covered by COPPA would be required to obtain separate verifiable parental consent to disclose information to third parties, including third-party advertisers, unless the disclosure is integral to the nature of the website or online service. Firms also could not condition access to services on the disclosure of personal information to third parties.
  • Prohibition against conditioning a child’s participation on collection of personal information: The proposal reinforces the current rule’s prohibition on conditioning participation in an activity on the collection of personal data to make clear that it serves as an outright ban on collecting more personal information than is reasonably necessary for a child to participate in a game, offering of a prize, or another activity. In addition, the FTC is considering adding new language to this section to clarify the meaning of “activity.”
  • Limits on the support for the internal operations exception: The current rule allows operators to collect persistent identifiers without first obtaining verifiable parental consent as long as the operator does not collect any other personal information and uses the persistent identifier solely to provide “support for the internal operations of the website or online service.” The proposed rule changes would require operators utilizing this exception to provide an online notice that states the specific internal operations for which the operator has collected a persistent identifier and how they will ensure that such identifier is not used or disclosed to contact a specific individual, including through targeted advertising.
  • Limits on nudging kids to stay online: Operators would be prohibited from using online contact information and persistent identifiers collected under COPPA’s multiple contact and support for the internal operations exceptions to send push notifications to children to prompt or encourage them to use their service more. Operators that use personal information collected from a child to prompt or encourage use of their service would also be required to flag such usage in their COPPA-required direct and online notices.
  • Changes related to Ed Tech: The FTC has proposed codifying its current guidance related to the use of education technology to prohibit commercial use of children’s information and implement additional safeguards. The proposed rule would allow schools and school districts to authorize ed tech providers to collect, use, and disclose students’ personal information but only for a school-authorized educational purpose and not for any commercial purpose.
  • Increasing accountability for Safe Harbor programs: The proposed rule would increase transparency and accountability of COPPA Safe Harbor programs, including by requiring each program to publicly disclose its membership list and report additional information to the Commission.
  • Strengthening data security requirements: The FTC has proposed strengthening the COPPA Rule’s data security requirements by mandating that operators establish, implement, and maintain a written children’s personal information security program that contains safeguards that are appropriate to the sensitivity of the personal information collected from children.
  • Limits on data retention: The FTC also would strengthen the COPPA Rule’s data retention limits by allowing for personal information to be retained only for as long as necessary to fulfill the specific purpose for which it was collected. The proposed change would also prohibit operators from using retained information for any secondary purpose, and it explicitly states that operators cannot retain the information indefinitely. The Rule would also require operators to establish, and make public, a written data retention policy for children’s personal information.
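
The proposed retention limit reduces to a purge policy: keep a record only while the specific purpose it was collected for is still being fulfilled, never indefinitely, and never reuse it for a secondary purpose. A minimal sketch, assuming hypothetical record fields (`purpose`, `purpose_fulfilled`); a real policy would also have to be written down and made public, as the proposal requires.

```python
from dataclasses import dataclass


@dataclass
class ChildRecord:
    user_id: str
    purpose: str             # the specific purpose the data was collected for
    purpose_fulfilled: bool  # hypothetical flag set once that purpose is met


def purge_expired(records: list[ChildRecord]) -> list[ChildRecord]:
    """Retain a record only as long as its original collection purpose is
    unfulfilled; everything else is deleted (no indefinite retention)."""
    return [r for r in records if not r.purpose_fulfilled]


records = [
    ChildRecord("a1", "one-time contest entry", purpose_fulfilled=True),
    ChildRecord("b2", "active game account", purpose_fulfilled=False),
]
assert [r.user_id for r in purge_expired(records)] == ["b2"]
```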

In addition, the FTC has proposed changes to some definitions in the rule, including expanding the definition of “personal information” to include biometric identifiers, and stating that the Commission will consider marketing materials, representations to consumers or third parties, reviews by users or third parties, and the age of users on similar websites or services when determining whether a website or online service is directed to children.

Those interested in weighing in on the proposed rule changes will have 60 days to submit comments to the FTC after the notice is published in the Federal Register, which is expected in the next two weeks.


Jacob Horowitz is a contributing editor at Compliance Chief 360°


]]>
https://compliancechief360.com/ftc-proposes-significant-changes-to-online-protection-rules-for-children/feed/ 0
FTC Bans Rite Aid from Using Facial Recognition for Five Years https://compliancechief360.com/ftc-bans-rite-aid-from-using-facial-recognition-for-five-years/ https://compliancechief360.com/ftc-bans-rite-aid-from-using-facial-recognition-for-five-years/#respond Wed, 20 Dec 2023 17:03:09 +0000 https://compliancechief360.com/?p=3380 The Federal Trade Commission has prohibited Rite Aid from using facial recognition technology for surveillance purposes for five years as part of a settlement of charges that the retailer used the technology improperly. The FTC had accused Rite Aid of failing to implement reasonable procedures and prevent harm to consumers in its use of facial Read More

The post FTC Bans Rite Aid from Using Facial Recognition for Five Years appeared first on Compliance Chief 360.

]]>
The Federal Trade Commission has prohibited Rite Aid from using facial recognition technology for surveillance purposes for five years as part of a settlement of charges that the retailer used the technology improperly. The FTC had accused Rite Aid of failing to implement reasonable procedures and prevent harm to consumers in its use of facial recognition technology in hundreds of stores. Rite Aid used the technology to attempt to identify known shoplifters and others who had caused trouble at its stores in the past.

The proposed order will require Rite Aid to implement comprehensive safeguards to prevent these types of harm to consumers when deploying automated systems that use biometric information to track them or flag them as security risks. It also will require Rite Aid to discontinue using any such technology if it cannot control potential risks to consumers. To settle charges it violated a 2010 Commission data security order by failing to adequately oversee its service providers, Rite Aid will also be required to implement a robust information security program, which must be overseen by the company’s top executives.

“Rite Aid’s reckless use of facial surveillance systems left its customers facing humiliation and other harms, and its order violations put consumers’ sensitive information at risk,” said Samuel Levine, Director of the FTC’s Bureau of Consumer Protection. “Today’s groundbreaking order makes clear that the Commission will be vigilant in protecting the public from unfair biometric surveillance and unfair data security practices.”

False Accusations from Facial Recognition Failures

In a complaint filed in federal court, the FTC says that from 2012 to 2020, Rite Aid deployed artificial intelligence-based facial recognition technology to identify customers who may have been engaged in shoplifting or other problematic behavior. The complaint charges that the company failed to take reasonable measures to prevent harm to consumers: employees erroneously accused customers of wrongdoing after the technology falsely flagged them as matching someone previously identified as a shoplifter or other troublemaker.

Preventing the misuse of biometric information is a high priority for the FTC, which warned earlier this year that it would be closely monitoring the sector. Rite Aid did not inform consumers that it was using the technology in its stores, and employees were discouraged from revealing that it was in use. In addition, the FTC says Rite Aid’s actions disproportionately impacted people of color.

According to the complaint, Rite Aid contracted with two companies to help create a database of images of individuals—considered to be “persons of interest” because Rite Aid believed they engaged in or attempted to engage in criminal activity at one of its retail locations—along with their names and other information such as any criminal background data.

The system generated thousands of false-positive matches, the FTC says. For example, the technology sometimes matched customers with people who had originally been enrolled in the database based on activity thousands of miles away, or flagged the same person at dozens of different stores all across the United States, according to the complaint. Specifically, the complaint says Rite Aid failed to:

  • Consider and mitigate potential risks to consumers from misidentifying them, including heightened risks to certain consumers because of their race or gender;
  • Test, assess, measure, document, or inquire about the accuracy of its facial recognition technology before deploying it, including failing to seek any information from either vendor about the extent to which the technology had been tested for accuracy;
  • Prevent the use of low-quality images in connection with its facial recognition technology, increasing the likelihood of false-positive match alerts;
  • Regularly monitor or test the accuracy of the technology after it was deployed, including by failing to implement or enforce any procedure for tracking the rate of false-positive matches or the actions taken based on them; and
  • Adequately train employees tasked with operating facial recognition technology in its stores and flag that the technology could generate false positives. Even after Rite Aid switched to a technology that enabled employees to report a “bad match” and required them to use it, the company did not take action to ensure employees followed this policy.

Failure to Safeguard Consumers’ Personal Data

In its complaint, the FTC also says Rite Aid violated its 2010 data security order with the Commission by failing to adequately implement a comprehensive information security program. Among other things, the 2010 order required Rite Aid to ensure its third-party service providers had appropriate safeguards to protect consumers’ personal data. In addition to the ban and required safeguards for automated biometric security or surveillance systems, other provisions of the proposed order prohibit Rite Aid from misrepresenting its data security and privacy practices and also require the company to:

  • Delete, and direct third parties to delete, any images or photos they collected because of Rite Aid’s facial recognition system as well as any algorithms or other products that were developed using those images and photos;
  • Notify consumers when their biometric information is enrolled in a database used in connection with a biometric security or surveillance system and when Rite Aid takes some kind of action against them based on an output generated by such a system;
  • Investigate and respond in writing to consumer complaints about actions taken against consumers related to an automated biometric security or surveillance system;
  • Provide clear and conspicuous notice to consumers about the use of facial recognition or other biometric surveillance technology in its stores;
  • Delete any biometric information it collects within five years;
  • Implement a data security program to protect and secure personal information it collects, stores, and shares with its vendors;
  • Obtain independent third-party assessments of its information security program; and
  • Provide the Commission with an annual certification from its CEO documenting Rite Aid’s adherence to the order’s provisions.

The complaint and order were filed in the Eastern District of Pennsylvania. Rite Aid is currently in bankruptcy proceedings, and the order will take effect after approval from the bankruptcy court and the federal district court, as well as modification of the 2010 order by the Commission.


Jacob Horowitz is a contributing editor at Compliance Chief 360°


]]>
https://compliancechief360.com/ftc-bans-rite-aid-from-using-facial-recognition-for-five-years/feed/ 0
FTC Expands Data Breach Reporting Requirements to Nonbank Financial Firms https://compliancechief360.com/ftc-expands-data-breach-reporting-requirements-to-nonbank-financial-firms/ https://compliancechief360.com/ftc-expands-data-breach-reporting-requirements-to-nonbank-financial-firms/#respond Mon, 30 Oct 2023 18:56:57 +0000 https://compliancechief360.com/?p=3321 The Federal Trade Commission has altered its data security rule, known as the Safeguards Rule, to require nonbank financial firms—including mortgage brokers, auto dealers, and payday lenders—to report data breaches to the agency, according to an announcement made Friday. The FTC’s Safeguards Rule requires non-banking financial institutions, such as mortgage brokers, motor vehicle dealers, and payday Read More

The post FTC Expands Data Breach Reporting Requirements to Nonbank Financial Firms appeared first on Compliance Chief 360.

]]>
The Federal Trade Commission has altered its data security rule, known as the Safeguards Rule, to require nonbank financial firms—including mortgage brokers, auto dealers, and payday lenders—to report data breaches to the agency, according to an announcement made Friday.

The FTC’s Safeguards Rule requires non-banking financial institutions, such as mortgage brokers, motor vehicle dealers, and payday lenders, to develop, implement, and maintain a comprehensive security program to keep their customers’ information safe. In October 2021, the FTC announced it had finalized changes to the Safeguards Rule to strengthen the data security safeguards that financial institutions are required to put in place to protect their customers’ financial information. The FTC also sought comment on a proposed supplemental amendment to the Safeguards Rule that would require financial institutions to report certain data breaches and other security events to the Commission.

“Companies that are trusted with sensitive financial information need to be transparent if that information has been compromised,” said Samuel Levine, Director of the FTC’s Bureau of Consumer Protection. “The addition of this disclosure requirement to the Safeguards Rule should provide companies with additional incentive to safeguard consumers’ data.”

The amendment requires financial institutions to notify the FTC as soon as possible, and no later than 30 days after discovery, of a security breach involving the information of at least 500 consumers. Notification is required if unencrypted customer information has been acquired without the authorization of the individual to whom the information pertains. The notice to the FTC must include certain information about the event, such as the number of consumers affected or potentially affected.
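
The reporting trigger reduces to three tests: unencrypted customer information, acquired without authorization, affecting at least 500 consumers; if all three hold, notice is due as soon as possible and no later than 30 days after discovery. A sketch of that logic, with hypothetical function and parameter names:

```python
from datetime import date, timedelta

REPORTING_THRESHOLD = 500                  # consumers affected
NOTIFICATION_WINDOW = timedelta(days=30)   # counted from discovery


def must_notify_ftc(consumers_affected: int,
                    data_encrypted: bool,
                    acquisition_authorized: bool) -> bool:
    """Hypothetical check of the amended Safeguards Rule trigger: all three
    conditions must hold for the notification duty to attach."""
    return (consumers_affected >= REPORTING_THRESHOLD
            and not data_encrypted
            and not acquisition_authorized)


def notification_deadline(discovered: date) -> date:
    """Latest permissible notification date (30 days after discovery)."""
    return discovered + NOTIFICATION_WINDOW


assert must_notify_ftc(500, data_encrypted=False, acquisition_authorized=False) is True
assert must_notify_ftc(499, data_encrypted=False, acquisition_authorized=False) is False
assert notification_deadline(date(2024, 6, 1)) == date(2024, 7, 1)
```

Note the deadline is an outer bound; the rule still requires notice "as soon as possible."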

The Commission voted 3-0 to publish the notice amending the Safeguards Rule in the Federal Register. The amendment becomes effective 180 days after publication.


]]>
https://compliancechief360.com/ftc-expands-data-breach-reporting-requirements-to-nonbank-financial-firms/feed/ 0
Health Care Patient Data Breaches Doubled in 2023, Reaching 87M https://compliancechief360.com/health-care-patient-data-breaches-doubled-in-2023-reaching-87m/ https://compliancechief360.com/health-care-patient-data-breaches-doubled-in-2023-reaching-87m/#respond Wed, 25 Oct 2023 15:39:08 +0000 https://compliancechief360.com/?p=3317 Health care companies are increasingly falling victim to sophisticated hacking efforts—including ransomware attacks—insider threats, and basic security flaws despite the highly confidential nature of patient data. According to new research by Atlas VPN, a virtual private network provider, 87 million patients in the United States had their personal information improperly exposed so far in 2023. Read More

The post Health Care Patient Data Breaches Doubled in 2023, Reaching 87M appeared first on Compliance Chief 360.

]]>
Health care companies are increasingly falling victim to sophisticated hacking efforts—including ransomware attacks—insider threats, and basic security flaws despite the highly confidential nature of patient data.

According to new research by Atlas VPN, a virtual private network provider, 87 million patients in the United States have had their personal information improperly exposed so far in 2023. That is more than double the 37 million exposed in all of 2022, making data privacy a top concern among health care compliance officers.

Breaches have skyrocketed this year. In the first half of 2023 alone, hackers stole the data of more than 41 million people, and the third quarter marked an even greater cause for alarm, with 45 million more patients impacted.

Overall, there have already been 480 reported patient data breaches across the healthcare sector in the first three quarters of 2023 alone. This compares to only 373 total breaches during the entirety of 2022, highlighting the alarming acceleration in attacks.

The largest patient data incident so far was the HCA Healthcare breach, which impacted 11 million people. The second most significant breach happened at Managed Care of North America. The company found that an unauthorized third party accessed certain systems and stole the data of 8.9 million individuals.

This exponential growth highlights the ease with which hackers can access sensitive data. Medical records contain many personal details, making them a prime target. Yet healthcare organizations have not prioritized modern cybersecurity defenses to match the sophistication of criminal efforts.

“The sensitive nature of medical records makes them highly desirable targets for criminals, thus demanding the strongest security standards,” says Vilius Kardelis, a cybersecurity writer at Atlas VPN. “Patients deserve to know their most personal information is safe, and providers must ensure that confidence. Healthcare has to view data protection as being just as critical as patient care.”

Most Vulnerable States

While healthcare data breaches impact patients nationwide, analysis shows certain states have been affected more than others.

California tops the list with 43 healthcare organizations afflicted by patient data breaches so far this year. The state’s massive population and concentration of healthcare providers likely make California a prime target.

New York comes in second, with 42 healthcare data breaches reported. Texas is third, with 38 healthcare entities experiencing breaches. Other states near the top include Massachusetts and Pennsylvania, with 31 and 30 breaches, respectively.

Vermont remains the only state with no reported healthcare breaches in 2023. Vermont’s small population and lack of major cities may allow it to fly under the radar of sophisticated hackers looking for maximum reward.

The data is based on the U.S. Department of Health and Human Services Office for Civil Rights database. Health organizations must report to the secretary any health data breach that impacts 500 or more people, and those reports are made public.


Joseph McCafferty is editor & publisher of Compliance Chief 360°


]]>
https://compliancechief360.com/health-care-patient-data-breaches-doubled-in-2023-reaching-87m/feed/ 0