Data Governance Archives - Compliance Chief 360 https://compliancechief360.com/tag/data-governance/ The independent knowledge source for Compliance Officers Thu, 28 Mar 2024 21:32:57 +0000 en-US hourly 1 https://compliancechief360.com/wp-content/uploads/2021/06/cropped-Compliance-chief-logo-square-only-2021-32x32.png Data Governance Archives - Compliance Chief 360 https://compliancechief360.com/tag/data-governance/ 32 32 SEC’s Cyber-Rule Enforcement a Prime Worry for Compliance https://compliancechief360.com/secs-cyber-rule-enforcement-a-prime-worry-for-compliance/ https://compliancechief360.com/secs-cyber-rule-enforcement-a-prime-worry-for-compliance/#respond Thu, 28 Mar 2024 21:22:25 +0000 https://compliancechief360.com/?p=3523 According to a 2024 Cybersecurity Benchmarking Survey, 45 percent of surveyed compliance personnel from asset management, investment adviser and private market firms have expressed concerns about how the Securities and Exchange Commission (SEC) will enforce its newly developed cybersecurity rules.  The ACA Group and National Society of Compliance Professionals released the results from the survey Read More

The post SEC’s Cyber-Rule Enforcement a Prime Worry for Compliance appeared first on Compliance Chief 360.

]]>
According to a 2024 Cybersecurity Benchmarking Survey, 45 percent of surveyed compliance personnel from asset management, investment adviser and private market firms have expressed concerns about how the Securities and Exchange Commission (SEC) will enforce its newly developed cybersecurity rules.

 The ACA Group and National Society of Compliance Professionals released the results from the survey that exhibited the sense of uncertainty surrounding the enforcement of the SEC’s cybersecurity rules. The results indicated that 44 percent of respondents surveyed said they are uncertain about how the SEC will enforce the rules, while 36 percent of compliance professionals cited concerns with complying with cyber incident reporting requirements and timeframes.

Mike Pappacena, a partner of ACA group, said in a statement that “it’s clear that regulatory compliance remains a top concern,” because nearly half of respondents expressed uncertainty about SEC enforcement. Pappacena said the survey results underline the importance of staying ahead of evolving cybersecurity threats.

The online survey consisted of around 310 investment adviser firms. All firm sizes were represented and responding firms belonged to varied business types, with most responses coming from asset managers, broker- dealers, and alternative investment advisors.

According to the survey, around 80% of the participants are confident in their firms’ ability to combat a cyber breach and that the top cyber threat that raised concern is payment fraud and business email compromise.

As a result of the SEC’s adopted rule, public companies are now required to disclose cybersecurity incidents they experience and to disclose on an annual basis material information regarding their cybersecurity risk management, strategy, and governance. The SEC rules now require companies to disclose any cybersecurity incident they determine to be material and to describe the material aspects of the incident’s nature, scope, and timing, as well as its material impact or reasonably likely material impact on the registrant.

The SEC additionally requires companies to describe their processes, if any, for assessing, identifying, and managing material risks from cybersecurity threats, as well as the material effects or reasonably likely material effects of risks from cybersecurity threats and previous cybersecurity incidents. The companies are provided a four-day grace period to disclose any cybersecurity incidents from the moment it deems the incident as material.

The SEC’s Consideration of Additional Cybersecurity Proposals

Cybersecurity has been a top priority for the SEC. The Commission is currently considering other cybersecurity-related proposals including one that would require brokers, dealers, investment advisers and companies to implement written policies and procedures concerning unauthorized access to or use of customer information. This would include procedures that are purposed for notifying customers of the incident.

The SEC is also proposing to broaden the scope of information covered by making changes to the requirements for safeguarding customer records and information, and for properly disposing of consumer report information.

Although these proposed measures signal a determined effort to enhance protection for investors, many are worried as to exactly how the SEC will enforce these newly adopted rules and proposals.   end slug


Jacob Horowitz is a contributing editor at Compliance Chief 360°    

The post SEC’s Cyber-Rule Enforcement a Prime Worry for Compliance appeared first on Compliance Chief 360.

]]>
https://compliancechief360.com/secs-cyber-rule-enforcement-a-prime-worry-for-compliance/feed/ 0
EU Passes World’s First Comprehensive AI Law https://compliancechief360.com/eu-passes-worlds-first-comprehensive-ai-law/ https://compliancechief360.com/eu-passes-worlds-first-comprehensive-ai-law/#respond Fri, 15 Mar 2024 17:34:22 +0000 https://compliancechief360.com/?p=3512 The European Parliament approved the Artificial Intelligence Act (AIA), a regulation aimed at ensuring safety and compliance with fundamental rights, while boosting innovation within the artificial intelligence (AI) context. AIA, which is set take effect in increments over the next few years, ultimately establishes obligations for AI based on its potential risks and level of Read More

The post EU Passes World’s First Comprehensive AI Law appeared first on Compliance Chief 360.

]]>
The European Parliament approved the Artificial Intelligence Act (AIA), a regulation aimed at ensuring safety and compliance with fundamental rights, while boosting innovation within the artificial intelligence (AI) context. AIA, which is set take effect in increments over the next few years, ultimately establishes obligations for AI based on its potential risks and level of impact.

AIA is the world’s first set of regulations designed to oversee the field of AI. “We finally have the world’s first binding law on artificial intelligence, to reduce risks, create opportunities, combat discrimination, and bring transparency,” said Brando Benifei, a European Union lawmaker from Italy. “Thanks to Parliament, unacceptable AI practices will be banned in Europe and the rights of workers and citizens will be protected. The AI Office will now be set up to support companies to start complying with the rules before they enter into force. We ensured that human beings and European values are at the very center of AI’s development.”

The new law comes at a point where many countries have introduced new AI rules. Last year, the Biden administration approved an executive order requiring AI companies to notify the government when developing AI models that may pose serious risk to national security, national economic security, or national public health and safety.

AIA Bans Specific Uses of AI

AIA bans certain AI applications that threaten citizens’ rights, including biometric categorization systems based on sensitive information and real-time and remote biometric identification systems, such as facial recognition. The use of AI to classify people based on behavior, socio-economic status or personal characteristics and to manipulates human behavior or exploits people’s vulnerabilities will also be forbidden.

However, some exceptions may be allowed for law enforcement purposes. “Real-time” remote biometric identification systems will be allowed in a limited number of serious cases, while “post” remote biometric identification systems, where identification occurs after a significant delay, will be allowed to prosecute serious crimes and only after court approval.

AIA also introduces new transparency rules that mainly effect Generative AI. The regulation sets out multiple transparency requirements that this sort of AI will have to satisfy, including compliance with EU copyright law. This entails disclosing when content is generated by AI, implementing measures within the model to prevent the generation of illegal content, and providing summaries of copyrighted data utilized during the model’s training process. Additionally, artificial or manipulated images, audio or video content (“deepfakes”) need to be clearly labelled as such.

AIA is projected to become officially effective by May or June, pending some last procedural steps, including approval from EU member states. Implementation of provisions will occur gradually, with countries require to prohibit banned AI systems six months following the law’s enactment.   end slug


Jacob Horowitz is a contributing editor at Compliance Chief 360°

The post EU Passes World’s First Comprehensive AI Law appeared first on Compliance Chief 360.

]]>
https://compliancechief360.com/eu-passes-worlds-first-comprehensive-ai-law/feed/ 0
Boeing Pays U.S Department of State $51 million for Export Violations https://compliancechief360.com/boeing-pays-u-s-department-of-state-51-million-for-export-violations/ https://compliancechief360.com/boeing-pays-u-s-department-of-state-51-million-for-export-violations/#respond Fri, 08 Mar 2024 20:53:38 +0000 https://compliancechief360.com/?p=3501 Boeing has agreed to pay $51 million in a settlement with the U.S Department of State (DoS) to resolve around 200 violations of the Arms Control Act and the International Traffic in Arms Regulations (ITAR). The DoS and Boeing reached this settlement following an extensive compliance review by the Office of Defense Trade Controls Compliance Read More

The post Boeing Pays U.S Department of State $51 million for Export Violations appeared first on Compliance Chief 360.

]]>
Boeing has agreed to pay $51 million in a settlement with the U.S Department of State (DoS) to resolve around 200 violations of the Arms Control Act and the International Traffic in Arms Regulations (ITAR). The DoS and Boeing reached this settlement following an extensive compliance review by the Office of Defense Trade Controls Compliance in the Department’s Bureau of Political-Military Affairs.

The settlement between the Department and Boeing deals with Boeing’s unauthorized exports and retransfers of technical data to foreign employees and contractors; unauthorized exports of defense products, including unauthorized exports of technical data to China; and violations of license terms, and condition of Directorate of Defense Trade Controls authorizations.

“The U.S. government reviewed copies of the files referenced in this voluntary disclosure and determined that certain unauthorized exports to the [People’s Republic of China] caused harm to U.S. national security,” the State Department alleged. “The U.S. government also concluded that a certain unauthorized export to Russia created the potential for harm to U.S. national security.”

Boeing’s Violations in Detail

Specifically, the complaint alleged that multiple Boeing employees downloaded sensitive data in China that related to various defense products such as fighter jets, an airborne warning system and an attack helicopter.  Additionally, Boeing revealed that there were at least 80 instances across 18 countries where its employees downloaded sensitive ITAR-controlled data.

All of the alleged violations were voluntarily disclosed by the aerospace giant, and a considerable majority come before 2020. Boeing cooperated with the Department’s review of its investigations and has implemented many improvements to its compliance program since the issues were discovered.

“We are committed to our trade controls obligations, and we look forward to working with the State Department under the agreement announced today,” Boeing said in a public statement. “We are committed to continuous improvement of that program, and the compliance undertakings reflected in this agreement will help us advance that objective.”

The Terms of the Settlement

Under the terms of the 36-month Consent Agreement, Boeing will pay a civil penalty of $51 million. The Department has agreed to suspend $24 million of this amount on the condition that the funds will be used to strengthen the company’s compliance program. In addition, for a period of at least 24 months, the DoS will provide an external compliance officer to oversee Boeing in order to ensure that the company is adhering to the consent. This will also require two external audits of its ITAR compliance program and implement additional compliance measures.

This settlement demonstrates the Department’s pivotal role in furthering the national security and foreign policy of the U.S. by controlling the export of defense articles. The settlement serves as a reminder that when exporting defense products, companies must do so with the explicit authorization from the DoS. end slug


Jacob Horowitz is a contributing editor at Compliance Chief 360°

The post Boeing Pays U.S Department of State $51 million for Export Violations appeared first on Compliance Chief 360.

]]>
https://compliancechief360.com/boeing-pays-u-s-department-of-state-51-million-for-export-violations/feed/ 0
France Fines Amazon $35 Million for Excessive Monitoring of Employees https://compliancechief360.com/france-fines-amazon-35-million-for-excessive-monitoring-of-employees/ https://compliancechief360.com/france-fines-amazon-35-million-for-excessive-monitoring-of-employees/#respond Thu, 25 Jan 2024 21:36:21 +0000 https://compliancechief360.com/?p=3445 The French Data Protection Authority (FDPA) issued a $35 million fine to Amazon for its excessive surveillance of its employees, including the company’s relentless tracking of employee performance and breaks, as well as the implementation of a video monitoring system without informed employee consent. The Commission Nationale de l’informatique et des Libertes (CNIL), ruled that Read More

The post France Fines Amazon $35 Million for Excessive Monitoring of Employees appeared first on Compliance Chief 360.

]]>
The French Data Protection Authority (FDPA) issued a $35 million fine to Amazon for its excessive surveillance of its employees, including the company’s relentless tracking of employee performance and breaks, as well as the implementation of a video monitoring system without informed employee consent.

The Commission Nationale de l’informatique et des Libertes (CNIL), ruled that Amazon’s system of measuring how quickly its employees scanned items and how long they took breaks was unnecessary and intrusive. The trillion-dollar company had implemented a “Stow Machine Gun Indicator” that required an item to be scanned in no less than 1.25 seconds after the previous one and was immediately alerted when an employee was not keeping up with the required pace.

Amazon also employed an “idle time indicator” and an “latency under ten minutes indicator” which alerted the company when an employee took a break for ten minutes or more and when a scanner was interrupted for up to ten minutes. Because of the large amount of pressure this system placed on Amazon’s employees, CNIL declared the system as extremely excessive, stating that it is “illegal to set up a system measuring work interruptions with such accuracy, potentially requiring employees to justify every break or interruption.”

In its ruling, CNIL examined the three indicators and determined that they led to an excessive monitoring of Amazon’s employees by the company. Specifically, the Commission found that the processing of the Stow Machine Gun Indicator meant that nearly any activity of an employee can be constantly monitored to the nearest second, and errors are common. The use of the other indicators made it possible to constantly monitor any time an employee’s scanner is interrupted even for a small amount of time.

Amazon Charged with Unauthorized Employee Surveillance

CNIL additionally stated that Amazon was holding on to employee surveillance data for an exorbitant amount of time of 31 days. Amazon should not be permitted to collect “every detail of the employee’s quality and productivity indicators collected using the scanners over the last month,” the ruling said. The Commission stated that it would be enough to review the surveillance data on a mere weekly basis.

FDPA also discovered that Amazon engaged in video surveillance of employees without their informed consent. This type of surveillance, without adequate notice, is a violation of the privacy protocols contained within General Data Protection Regulation, the French Authority said.

“We strongly disagree with the CNIL’s conclusions, which are factually incorrect, and we reserve the right to file an appeal,” Amazon said in a statement. “Warehouse management systems are industry standard and are necessary for ensuring the safety, quality and efficiency of operations and to track the storage of inventory and processing of packages on time and in line with customer expectations.”

This is not Amazon’s first time being charged with violations of the General Data Protection Regulation rules. In July 2021, Luxembourg issued the tech and retail giant a record fine of $886 million for violations stemming from its data processing practices.   end slug


Jacob Horowitz is a contributing editor at Compliance Chief 360°

The post France Fines Amazon $35 Million for Excessive Monitoring of Employees appeared first on Compliance Chief 360.

]]>
https://compliancechief360.com/france-fines-amazon-35-million-for-excessive-monitoring-of-employees/feed/ 0
FTC Enacts First-Ever Ban on Selling Sensitive Location Data https://compliancechief360.com/ftc-enacts-first-ever-ban-on-selling-sensitive-location-data/ https://compliancechief360.com/ftc-enacts-first-ever-ban-on-selling-sensitive-location-data/#respond Wed, 10 Jan 2024 19:35:28 +0000 https://compliancechief360.com/?p=3413 The Federal Trade Commission (FTC) has prohibited data broker X-Mode Social and its successor Outlogic from sharing or selling sensitive location data as part of a settlement resulting from allegations that the company sold precise location data that could be used to track people’s visits to private locations. In its first settlement with a data Read More

The post FTC Enacts First-Ever Ban on Selling Sensitive Location Data appeared first on Compliance Chief 360.

]]>
The Federal Trade Commission (FTC) has prohibited data broker X-Mode Social and its successor Outlogic from sharing or selling sensitive location data as part of a settlement resulting from allegations that the company sold precise location data that could be used to track people’s visits to private locations.

In its first settlement with a data broker regarding the collection and sale of sensitive location information, the FTC also accused X-Mode Social and Outlogic of failing to put in place reasonable safeguards on the use of such information by third parties. This settlement ultimately represents the FTC’s strong commitment to preventing companies from selling their users’ sensitive location information.

“Geolocation data can reveal not just where a person lives and whom they spend time with but also, for example, which medical treatments they seek and where they worship. The FTC’s action against X-Mode makes clear that businesses do not have free license to market and sell Americans’ sensitive location data,” said FTC Chair Lina Khan. “By securing a first-ever ban on the use and sale of sensitive location data, the FTC is continuing its critical work to protect Americans from intrusive data brokers and unchecked corporate surveillance.”

X-Mode/Outlogic has been selling location information that is tied to each user’s phone. This data isn’t anonymous and can be used to track where a person with a specific phone has been. The company also sells consumer location data to hundreds of clients in industries ranging from real estate to finance, as well as private government contractors for their own purposes, such as advertising.

According to the FTC’s complaint, until May 2023, the company did not have any policies in place to remove sensitive locations from the location data it sold. The FTC says X-Mode/Outlogic did not implement any reasonable safeguards against use of the location data it sells, putting consumers’ sensitive and private information at risk.

The information revealed through the location data that X-Mode/Outlogic sold not only violated consumers’ privacy but also exposed them to “potential discrimination, physical violence, emotional distress, and other harms,” according to the complaint. The FTC also mentioned that the company didn’t make sure users of its apps, like Drunk Mode and Walk additionally Humanity, which used X-Mode/Outlogic’s software, were properly informed about how their location data would be used.

The company also failed to employ the necessary technical safeguards and oversight to ensure that it honored requests by some android users to opt out of tracking and personalized ads, according to the complaint. The FTC says these practices violate the FTC Act’s prohibition against unfair and deceptive practices.

FTC Mandates Preservation of User Location Privacy

In addition to the limits on sharing certain sensitive locations, the proposed order requires X-Mode/Outlogic to create a program to ensure it develops and maintains a comprehensive list of sensitive locations, and ensure it is not sharing, or selling data about such locations. Other provisions of the proposed order require the company to:

  • Delete or destroy all the location data it previously collected, and any products produced from this data unless it obtains consumer consent or ensures the data has been deidentified or rendered non-sensitive;
  • Ensure that companies that provide location data to X-Mode/Outlogic are obtaining informed consent from consumers for the collection, use and sale of the data or stop using such information;
  • Implement procedures to ensure that recipients of its location data do not associate the data with locations that provide services to LGBTQ+ people such as bars or service organizations, with locations of public gatherings of individuals at political or social demonstrations or protests, or use location data to determine the identity or location of a specific individual;
  • Provide a simple and easy-to-find way for consumers to withdraw their consent for the collection and use of their location data and for the deletion of any location data that was previously collected;
  • Provide a clear and conspicuous means for consumers to request the identity of any individuals and businesses to whom their personal data has been sold or shared or give consumers a way to delete their personal location data from the commercial databases of all recipients of the data; and
  • Establish and implement a comprehensive privacy program that protects the privacy of consumers’ personal information and also create a data retention schedule.

The proposed order also limits the company from collecting or using location data when consumers have opted out of targeted advertising or tracking or if the company cannot verify records showing that consumers have provided consent to the collection of location data.   end slug

The post FTC Enacts First-Ever Ban on Selling Sensitive Location Data appeared first on Compliance Chief 360.

]]>
https://compliancechief360.com/ftc-enacts-first-ever-ban-on-selling-sensitive-location-data/feed/ 0
Social Media Companies Challenge Law Requiring Parental Consent https://compliancechief360.com/social-media-companies-challenge-law-requiring-parental-consent-to-use-their-platforms/ https://compliancechief360.com/social-media-companies-challenge-law-requiring-parental-consent-to-use-their-platforms/#respond Tue, 09 Jan 2024 16:13:35 +0000 https://compliancechief360.com/?p=3404 Starting next week, the state of Ohio will require social media platforms to obtain parental consent before a child under the age of sixteen creates an account on their websites. However, an association by the name of NetChoice is now challenging this law on the belief that it violates the first amendment right of free Read More

The post Social Media Companies Challenge Law Requiring Parental Consent appeared first on Compliance Chief 360.

]]>
Starting next week, the state of Ohio will require social media platforms to obtain parental consent before a child under the age of sixteen creates an account on their websites. However, an association by the name of NetChoice is now challenging this law on the belief that it violates the first amendment right of free speech and free press.

“Ohio has decided that the government—in the first instance—should decide what speech is appropriate for minors on the internet,” NetChoice said. “The act restricts minors’ access to covered websites unless a parent provides ‘verifiable’ consent through a state-mandated means.” The association filed its lawsuit against the Ohio Attorney General Dave Yost.

This law, known as the Parental Notification by Social Media Operators Act (PNSNOA), would require platforms such as TikTok and Instagram to find a way to verify a child’s age, and obtain parental consent before the account can be made. NetChoice argues that, rather than imposing restrictions on users and platforms through required parental consent, a more effective approach would involve offering parents educational resources to enhance their understanding of potential risks associated with their child’s use of social media.

“We at NetChoice believe families equipped with educational resources are capable of determining the best approach to online services and privacy protections for themselves,” Chris Marchese, director of the NetChoice Litigation Center said. “With our lawsuit, we will fight to ensure all Ohioans can embrace digital tools without their privacy, security, and rights being thwarted.”

Under the law, these platforms are required to obtain parental consent by doing at least one of the following:

  • Require a parent or legal guardian to sign and return a form consenting to the child’s use or access.
  • If a payment is necessary, require the parent to use a credit card, debit card or other payment system that provides   n notification for each separate transaction.
  • Require a parent or legal guardian to call a telephone number to confirm the child’s use or access.
  • Require a parent or legal guardian to connect through videoconference to confirm the child’s use or access.
  • Verify a parent’s or legal guardian’s identity by checking their ID.

Parental Support for PNSNOA

This new law has raised much controversy between parents and tech companies. Of course, the companies themselves are opposed to PNSNOA however, some Ohio parents have expressed excitement over the law under the belief that it will safeguard their children from the dangers of social media. “I actually like it,” said Kenyetta Whipple, a mother from Youngstown, Ohio. “It’s scary nowadays because you can talk to a complete stranger with the click of a button, and I have a young daughter.  And I just feel like we’re living in a world where there is just too much free access to our kids. You can access someone’s locations, videos, and pictures, and a lot of children are naïve.”

The inherent purpose of this law to protect children from “harmful content” and potential obsessions. As Attorney General Yost noted in a statement, “in filing this lawsuit, these companies are determined to go around parents to expose children to harmful content and addict them to their platforms. These companies know that they are harming our children with addictive algorithms with catastrophic health and mental health outcomes.”

NetCoice has also sued California and Arkansas for the enactment of laws similar to PNSNOA. The association has prevailed in both cases and neither states were permitted place such restrictions on the social media platforms and its young users.end slug


Jacob Horowitz is a contributing editor at Compliance Chief 360°

The post Social Media Companies Challenge Law Requiring Parental Consent appeared first on Compliance Chief 360.

]]>
https://compliancechief360.com/social-media-companies-challenge-law-requiring-parental-consent-to-use-their-platforms/feed/ 0
European Commission Looks to Streamline GDPR Enforcement https://compliancechief360.com/european-commission-looks-to-streamline-gdpr-enforcement/ https://compliancechief360.com/european-commission-looks-to-streamline-gdpr-enforcement/#respond Thu, 06 Jul 2023 17:17:15 +0000 https://compliancechief360.com/?p=3090 The European Commission has proposed a new regulation intended to streamline cooperation efforts between member countries when enforcing the EU’s data privacy laws, known as the General Data Protection Regulation (GDPR). If adopted, the new law would require European privacy regulators, known as data protection authorities (DPAs), to share more information upfront in major privacy Read More

The post European Commission Looks to Streamline GDPR Enforcement appeared first on Compliance Chief 360.

]]>
The European Commission has proposed a new regulation intended to streamline cooperation efforts between member countries when enforcing the EU’s data privacy laws, known as the General Data Protection Regulation (GDPR).

If adopted, the new law would require European privacy regulators, known as data protection authorities (DPAs), to share more information upfront in major privacy cases, such as those against Google, Amazon, and Meta, and more often settle such cases out of court. The objective of the proposed regulation is to speed up how its GDPR is enforced.

The Commission said it found in its June 2020 evaluation report that “procedural differences applied by DPAs hinder the smooth and effective functioning of the GDPR’s cooperation and dispute resolution mechanisms.” In October 2022, the European Data Protection Board (EDPB) provided to the Commission a list of procedural aspects that it said could benefit from further harmonization at EU level.

The proposed GDPR Procedural Regulation, announced July 4, addresses input from a wide variety of stakeholders, including the EDPB, representatives from civil society, businesses, academia, and legal practitioners, as well as member states.

According to the Commission, the proposed regulation would establish concrete procedural rules for DPAs when applying the GDPR in cases that affect individuals in more than one member state. “For example, it will introduce an obligation for the lead DPA to send a ‘summary of key issues’ to their counterparts concerned, identifying the main elements of the investigation and its views on the case, and therefore allowing them to provide their views early on,” the Commission said.

“Should a DPA disagree with the lead DPA’s assessment, this authority can request a joint operation or mutual assistance mechanism, as provided by the GDPR,” the European Commission stated in a Q&A document. “Should the DPAs still disagree on the scope of a complaint-based case, the proposal empowers the EDPB to adopt an urgent binding resolution to resolve such disagreement early in the process.

The ‘Right to Be Heard’

The proposed regulation would also provide data controllers and data processors under investigation with “the right to be heard at key stages in the procedure, including during dispute resolution by the EDPB,” the Commission said.

Thus, for businesses, the proposed regulation would clarify their due process rights when a DPA investigates a potential GDPR violation and bring more legal certainty, while facilitating early consensus-building in investigations for DPAs, the Commission said.

Aligning GDPR Procedural Rules

The new regulation provides detailed rules to support the smooth functioning of the cooperation and consistency mechanism established by the GDPR, aligning rules in the following areas:

  • Rights of complainants: The proposal aligns the requirements for a cross-border complaint to be admissible, removing the current obstacles brought by DPAs following different rules. It establishes common rights for complainants to be heard in cases where their complaints are fully or partially rejected. In cases where a complaint is investigated, the proposal specifies rules for them to be properly involved.
  • Rights of parties under investigation (controllers and processors): The proposal provides the parties under investigation with the right to be heard at key stages in the procedure, including during dispute resolution by the European Data Protection Board (EDPB), and clarifies the content of the administrative file and the parties’ rights of access to the file.
  • Streamlining cooperation and dispute resolution: Under the proposal, DPAs will be able to provide their views early on in investigations, and make use of all the tools of cooperation provided by the GDPR, such as joint investigations and mutual assistance. These provisions will enhance DPAs’ influence over cross-border cases, facilitate early consensus-building in the investigation, and reduce later disagreements. The proposal specifies detailed rules to facilitate the swift completion of the GDPR’s dispute resolution mechanism, and provides common deadlines for cross-border cooperation and dispute resolution.

“The harmonization of these procedural aspects will support the timely completion of investigations and the delivery of a swift remedies for individuals,” the Commission said in a statement.

“While the independent authorities are doing a tremendous work, it’s time to ensure we can operate faster and in a more decisive way, especially in serious cases in which one violation may have many victims across the EU,” said Věra Jourová, vice president of the European Commission for Values and Transparency. “Our proposal lays down rules to guarantee smooth cooperation among data protection authorities, supporting more vigorous enforcement, to the benefit of the people and businesses alike.”   end slug


Jaclyn Jaeger is a contributing editor at Compliance Chief 360° and a freelance business writer based in Manchester, New Hampshire.

The post European Commission Looks to Streamline GDPR Enforcement appeared first on Compliance Chief 360.

]]>
https://compliancechief360.com/european-commission-looks-to-streamline-gdpr-enforcement/feed/ 0
New FCC Task Force to Focus on Privacy and Data Protection https://compliancechief360.com/new-fcc-task-force-to-focus-on-privacy-and-data-protection/ https://compliancechief360.com/new-fcc-task-force-to-focus-on-privacy-and-data-protection/#respond Wed, 21 Jun 2023 15:58:22 +0000 https://compliancechief360.com/?p=3021 The Federal Communication Commission (FCC) recently announced the launch of a newly created Privacy and Data Protection Task Force, aimed at enhancing data privacy and security practices around consumer communications. This new task force, created by FCC Chairwoman Jessica Rosenworcel and led by FCC Enforcement Bureau Chief Loyaan Egal, is made up of FCC staff Read More

The post New FCC Task Force to Focus on Privacy and Data Protection appeared first on Compliance Chief 360.

]]>
The Federal Communication Commission (FCC) recently announced the launch of a newly created Privacy and Data Protection Task Force, aimed at enhancing data privacy and security practices around consumer communications.

This new task force, created by FCC Chairwoman Jessica Rosenworcel and led by FCC Enforcement Bureau Chief Loyaan Egal, is made up of FCC staff from across the agency, including in the areas of data breach reporting, enforcement, equipment authorization, and undersea cables.

“This FCC staff working group will coordinate across the agency on the rulemaking, enforcement, and public awareness needs in the privacy and data protection sectors,” the FCC stated. Priorities include “data breaches – such as those involving telecommunications providers and related to cyber intrusions – and supply chain vulnerabilities involving third-party vendors that service regulated communications providers.” The Privacy and Data Protection Task Force will streamline the FCC’s efforts in monitoring and enforcing compliance with relevant FCC rules on data privacy.

“We live in an era of always-on connectivity. Connection is no longer just convenient,” Rosenworcel said in a statement. “It fuels every aspect of modern civic and commercial life. To address the security challenges of this reality head-on, we must protect consumers’ information, ensure data security, and require cyber vigilance from every participant in our communications networks. This team of FCC experts will lead our efforts to protect consumer privacy.”  end slug


Jaclyn Jaeger is a contributing editor at Compliance Chief 360° and a freelance business writer based in Manchester, New Hampshire.

The post New FCC Task Force to Focus on Privacy and Data Protection appeared first on Compliance Chief 360.

]]>
https://compliancechief360.com/new-fcc-task-force-to-focus-on-privacy-and-data-protection/feed/ 0
How Employees Are Using ChatGPT on the Job https://compliancechief360.com/how-employees-are-using-chatgpt-on-the-job/ https://compliancechief360.com/how-employees-are-using-chatgpt-on-the-job/#respond Mon, 22 May 2023 20:49:26 +0000 https://compliancechief360.com/?p=2883 Office workers are using ChatGPT, or similar conversational AI models, for various tasks and purposes. Some of these uses can create risks of which compliance officers should be aware. Here are a few examples: 1) Virtual Assistants: ChatGPT can act as a virtual assistant, helping office workers manage their schedules, set reminders, and answer questions Read More

The post How Employees Are Using ChatGPT on the Job appeared first on Compliance Chief 360.

]]>
Office workers are using ChatGPT, or similar conversational AI models, for various tasks and purposes. Some of these uses can create risks of which compliance officers should be aware. Here are a few examples:

1) Virtual Assistants: ChatGPT can act as a virtual assistant, helping office workers manage their schedules, set reminders, and answer questions about their daily tasks. It can provide information about meetings, deadlines, and important events.

2) Customer Support: Office workers can use ChatGPT to handle customer inquiries and provide support. It can assist with common queries, provide troubleshooting steps, and offer basic information about products or services. ChatGPT can help reduce the workload of customer support teams and improve response times.


See related article: “Six Risks from AI Apps like ChatGPT that Compliance Leaders Should Know.”


3) Content Creation and Editing: ChatGPT can assist office workers in creating and editing content. It can generate drafts for articles, reports, or emails based on given prompts. Office workers can also use it to proofread and suggest improvements for their written work.

4) Research Assistance: When office workers need to gather information or conduct research, ChatGPT can help by providing relevant facts, statistics, or summaries. It can assist in finding reliable sources, extracting key details, and offering insights on various topics.

5) Language Translation: In international offices or when communicating with clients and colleagues from different countries, ChatGPT can be used to facilitate language translation. It can help office workers understand and translate written text or even provide spoken translations.

6) Data Analysis and Reporting: ChatGPT can assist office workers in analyzing data and generating reports. It can provide insights based on given datasets, answer queries about trends or patterns, and help present findings in a clear and concise manner.

7) Collaboration and Brainstorming: Office workers can use ChatGPT as a collaborative tool to facilitate brainstorming sessions. It can generate ideas, provide suggestions, and assist in the creative process. ChatGPT can act as a virtual team member, contributing to discussions and offering input.

These are just a few examples of how office workers can utilize ChatGPT to enhance productivity and streamline various tasks. The flexibility and versatility of conversational AI models make them valuable tools in an office environment.  end slug


(Editor’s Note: This article was written by ChatGPT.)

The post How Employees Are Using ChatGPT on the Job appeared first on Compliance Chief 360.

]]>
https://compliancechief360.com/how-employees-are-using-chatgpt-on-the-job/feed/ 0
Six Risks from ChatGPT that Compliance Leaders Should Know About https://compliancechief360.com/six-risks-from-ai-apps-like-chatgpt-that-compliance-leaders-should-know/ https://compliancechief360.com/six-risks-from-ai-apps-like-chatgpt-that-compliance-leaders-should-know/#respond Mon, 22 May 2023 20:25:23 +0000 https://compliancechief360.com/?p=2880 Artificial intelligence applications like ChatGPT are becoming common tools in the workplace to do everything from generating job descriptions, writing and editing reports, and to managing schedules (See related article, “How Employees Are Using ChatGPT on the Job“). But the apps aren’t perfect. In fact, they can be error prone and can even create new Read More

The post Six Risks from ChatGPT that Compliance Leaders Should Know About appeared first on Compliance Chief 360.

]]>
Artificial intelligence applications like ChatGPT are becoming common tools in the workplace to do everything from generating job descriptions, writing and editing reports, and to managing schedules (See related article, “How Employees Are Using ChatGPT on the Job“). But the apps aren’t perfect. In fact, they can be error prone and can even create new risks that companies must assess and manage.

Legal, internal audit, and compliance leaders should address their organization’s exposure to six specific ChatGPT risks, identified by consulting and research firm, Gartner. They must also consider what guardrails to establish to ensure responsible enterprise use of generative AI tools, according to Gartner.

“The output generated by ChatGPT and other large language model (LLM) tools are prone to several risks,” said Ron Friedmann, senior director analyst at Gartner’s Legal & Compliance Practice. “Legal and compliance leaders should assess if these issues present a material risk to their enterprise and what controls are needed, both within the enterprise and its extended enterprise of third and nth parties. Failure to do so could expose enterprises to legal, reputational, and financial consequences.”

The six risk from ChatGPT (and other AI apps) that legal, internal audit, and compliance leaders should evaluate include:

Risk 1: Fabricated and Inaccurate Answers

Perhaps the most common issue with ChatGPT and other LLM tools is a tendency to provide incorrect – although superficially plausible – information.

“ChatGPT is also prone to ‘hallucinations,’ including fabricated answers that are wrong, and nonexistent legal or scientific citations,” said Friedmann. “Legal and compliance leaders should issue guidance that requires employees to review any output generated by ChatGPT for accuracy, appropriateness, and actual usefulness before being accepted.”

Risk 2: Data Privacy and Confidentiality

Legal and compliance leaders should be aware that any information entered into ChatGPT, if chat history is not disabled, may become a part of its training dataset.

“Sensitive, proprietary, or confidential information used in prompts may be incorporated into responses for users outside the enterprise,” said Friedmann. “Legal and compliance need to establish a compliance framework for ChatGPT use, and clearly prohibit entering sensitive organizational or personal data into public LLM tools.”

Risk 3: Model and Output Bias

Despite OpenAI’s efforts to minimize bias and discrimination in ChatGPT, known cases of these issues have already occurred, and are likely to persist despite ongoing, active efforts by OpenAI and others to minimize these risks.

“Complete elimination of bias is likely impossible, but legal and compliance need to stay on top of laws governing AI bias, and make sure their guidance is compliant,” said Friedmann. “This may involve working with subject matter experts to ensure output is reliable and with audit and technology functions to set data quality controls.”

Risk 4: Intellectual Property (IP) and Copyright risks

ChatGPT in particular is trained on a large amount of internet data that likely includes copyrighted material. Therefore, it’s outputs have the potential to violate copyright or IP protections.

“ChatGPT does not offer source references or explanations as to how its output is generated,” said Friedmann. “Legal and compliance leaders should keep a keen eye on any changes to copyright law that apply to ChatGPT output and require users to scrutinize any output they generate to ensure it doesn’t infringe on copyright or IP rights.”

Risk 5: Cyber Fraud Risks

Bad actors are already misusing ChatGPT to generate false information at scale, such as fake reviews and falsified video and audio impersonations. Moreover, applications that use LLM models, including ChatGPT, are also susceptible to prompt injection, a hacking technique in which malicious adversarial prompts are used to trick the model into performing tasks that it wasn’t intended for such as writing malware codes or developing phishing sites that resemble well-known sites.

“Legal and compliance leaders should coordinate with owners of cyber risks to explore whether or when to issue memos to company cybersecurity personnel on this issue,” said Friedmann. “They should also conduct an audit of due diligence sources to verify the quality of their information.”

Risk 6: Consumer Protection Risks

Businesses that fail to disclose ChatGPT usage to consumers (for example, using it to create a customer support chatbot) run the risk of losing their customers’ trust and being charged with unfair practices under various laws. For instance, the California chatbot law mandates that in certain consumer interactions, organizations must clearly and conspicuously disclose that a consumer is communicating with a bot.

“Legal and compliance leaders need to ensure their organization’s ChatGPT use complies with all relevant regulations and laws, and appropriate disclosures have been made to customers,” said Friedmann.

The use of AI in the workplace is just getting started and is likely to balloon in the coming years. As these apps evolve and employees begin to use them in new and surprising ways, new risks are certain to emerge. Legal, risk, audit, and compliance professionals need to stay on top of these emerging risks and manage them to ensure they don’t cause negative consequences to the organization.  end slug


Joseph McCafferty is editor & publisher of Compliance Chief 360°

The post Six Risks from ChatGPT that Compliance Leaders Should Know About appeared first on Compliance Chief 360.

]]>
https://compliancechief360.com/six-risks-from-ai-apps-like-chatgpt-that-compliance-leaders-should-know/feed/ 0