Compliance Chief 360
https://compliancechief360.com/
The independent knowledge source for Compliance Officers

SEC’s Cyber-Rule Enforcement a Prime Worry for Compliance
https://compliancechief360.com/secs-cyber-rule-enforcement-a-prime-worry-for-compliance/
Thu, 28 Mar 2024 21:22:25 +0000

According to a 2024 Cybersecurity Benchmarking Survey, 45 percent of surveyed compliance personnel from asset management, investment adviser and private market firms have expressed concerns about how the Securities and Exchange Commission (SEC) will enforce its newly developed cybersecurity rules.

The ACA Group and the National Society of Compliance Professionals released the survey results, which highlighted the uncertainty surrounding enforcement of the SEC’s cybersecurity rules. The results indicated that 44 percent of respondents said they are uncertain about how the SEC will enforce the rules, while 36 percent of compliance professionals cited concerns with complying with cyber incident reporting requirements and timeframes.

Mike Pappacena, a partner at ACA Group, said in a statement that “it’s clear that regulatory compliance remains a top concern,” given that nearly half of respondents expressed uncertainty about SEC enforcement. Pappacena said the survey results underscore the importance of staying ahead of evolving cybersecurity threats.

The online survey drew responses from around 310 investment adviser firms. All firm sizes were represented, and responding firms spanned a variety of business types, with most responses coming from asset managers, broker-dealers, and alternative investment advisers.

According to the survey, around 80 percent of participants are confident in their firms’ ability to withstand a cyber breach, and the top cyber threats raising concern are payment fraud and business email compromise.

As a result of the SEC’s adopted rule, public companies are now required to disclose cybersecurity incidents they experience and to disclose on an annual basis material information regarding their cybersecurity risk management, strategy, and governance. The SEC rules now require companies to disclose any cybersecurity incident they determine to be material and to describe the material aspects of the incident’s nature, scope, and timing, as well as its material impact or reasonably likely material impact on the registrant.

The SEC additionally requires companies to describe their processes, if any, for assessing, identifying, and managing material risks from cybersecurity threats, as well as the material effects or reasonably likely material effects of risks from cybersecurity threats and previous cybersecurity incidents. Companies are given four business days to disclose a cybersecurity incident from the time they determine the incident is material.

The SEC’s Consideration of Additional Cybersecurity Proposals

Cybersecurity has been a top priority for the SEC. The Commission is currently considering other cybersecurity-related proposals, including one that would require brokers, dealers, investment advisers, and companies to implement written policies and procedures to address unauthorized access to or use of customer information. This would include procedures for notifying customers of such incidents.

The SEC is also proposing to broaden the scope of information covered by making changes to the requirements for safeguarding customer records and information, and for properly disposing of consumer report information.

Although these proposed measures signal a determined effort to enhance protection for investors, many remain worried about exactly how the SEC will enforce these newly adopted rules and proposals.


Jacob Horowitz is a contributing editor at Compliance Chief 360°    

Anticipating a Scandal: Is AI a Ticking Time Bomb for Companies?
https://compliancechief360.com/anticipating-a-scandal-is-ai-a-ticking-time-bomb-for-companies/
Wed, 27 Mar 2024 18:02:17 +0000

These days a corporate scandal seems to be an everyday occurrence, one that happens far too frequently. The causes are also far too predictable: failures in corporate governance, poor risk management, compliance failures, unreliable intelligence, inadequate security, insufficient resilience, ineffective controls, and failures by assurance providers.

A forensic post-mortem investigation into the cause of any corporate scandal or failure will identify a number (or perhaps all) of these deficiencies and weaknesses. But what if we could do a “pre-mortem” investigation? What if we could predict the scandal in advance and head it off by considering all the ways things could go wrong?

Artificial Intelligence is the latest buzz among compliance departments, and for good reason: It has the potential to completely transform compliance as it does for many corporate functions. But there is also a downside in the potential for massive risks that stem from the use of AI. It’s not hard to imagine that these AI risks will come to pass at one or more organizations and blow up into the latest scandal of epic proportions.

As it evolves, artificial intelligence technology is certain to contribute to the creation, preservation, and destruction of stakeholder value in the coming weeks, months, and years. In terms of value creation, digital and smart technologies are already pervasive, and AI in its many forms, such as machine learning, natural language processing, and computer vision, has the potential to build on this to add significant value, make enormous contributions, and create long-term positive impacts for society, the economy, and the environment.

It has the potential to solve complex problems and create opportunities that benefit all human beings and their ecosystems. Unfortunately, AI systems also have the potential for tremendous value destruction, and to cause an unimaginable level of harm and damage to human ecosystems, including business, society, and the planet.

Given the deficiencies and weaknesses described above in relation to everyday corporate scandals, one does not have to be a rocket scientist to predict that these same issues are also likely to arise in relation to AI technology. It is therefore incumbent upon our leaders to consider the potential serious impact, consequences, and repercussions which could emerge in relation to the development, deployment, use, and management of AI systems.

Anticipation of Future AI Hazards

An AI defense cycle can be viewed in terms of the corporate defense cycle, with the same unifying defense objectives representing the four cornerstones of a robust AI defense program.

Prudence and common sense suggest it is both logical and rational to anticipate the following deficiencies and weaknesses in relation to AI technology and to fully consider their potential for value destruction.

1. Failures in AI Governance
The current lack of a single comprehensive global AI governance framework has already led to inconsistencies and differences in approaches across various jurisdictions and regions. This is likely to result in potential conflicts between stakeholder groups with different priorities. The lack of a unified approach to AI governance can result in a lack of transparency, responsibility, and accountability which raises serious concerns about the social, moral, and ethical development and use of AI technologies. The ever-increasing lack of human oversight due to the development of autonomous AI systems simply reinforces these growing concerns. Prevailing planet governance issues are also likely to negatively impact on AI governance.

2. Poor AI Risk Management
There currently appears to be a similarly fragmented global approach to AI risk management. Some suggest this approach overemphasizes risk detection and reaction and underemphasizes risk anticipation and prevention. It tends to focus on addressing very specific risks (such as bias, privacy, and security) without giving due consideration to the broader systemic implications of AI development and its use.

Such a narrow focus on AI risks also fails to address the broader societal and economic impacts of AI and overlooks the interconnectedness of AI risks and their potential long-term consequences. Such short-sightedness is potentially very dangerous as it fails to address and keep pace with the potential damage of emerging risks while also failing to prepare for already flagged longer-term risks such as those posed by superintelligence or autonomous weapons systems and other potentially catastrophic outcomes.

3. AI Compliance Failures
AI compliance consists of a patchwork of AI laws, regulations, standards, and guidelines at national and international levels. This lack of harmonization means the rules are often misaligned and inconsistent, which makes them confusing and ineffective: difficult for stakeholders to comply with, and difficult for regulators to supervise and enforce, especially across borders.

This lack of clear regulation and the lack of appropriate enforcement mechanisms makes it difficult to hold actors to account for their actions and can encourage non-compliance, violations, and serious misconduct leading to the potential unsafe, unethical, and illegal use of AI technology. The existence of algorithmic bias can result in a lack of fairness and lead to an exacerbation of existing inequality, prejudice, and discrimination. A major concern is that the current voluntary nature of AI compliance and an overreliance on self-regulation is not sufficient to address these potentially systemic issues.

4. Unreliable AI Intelligence
Unreliable intelligence can ultimately result in poor decision making in its many forms. Many AI algorithms can be opaque in nature and are often referred to in terms of a “Black Box,” which hinders the clarity and transparency of the development and deployment of AI systems. Their complexity makes it difficult to interpret or fully comprehend their algorithmic decision-making and other outputs.

It is therefore difficult for stakeholders to understand and mitigate their limitations, potential risks, and the existence of biases. This can further contribute to accountability gaps and make it difficult to hold AI developers and users accountable for their actions. AI development can also lack the necessary stakeholder engagement and public participation which can mean a lack of the required diversity of thought needed for the necessary alignment with social, moral, and ethical values. This lack of transparency and understanding can expose the AI industry to the threat of clandestine influence.

5. Inadequate AI Security
The global approach to AI security also appears to be somewhat disjointed. Data is one of the primary resources of the AI industry and AI systems collect and process vast amounts of data. AI technologies can be vulnerable to cyberattacks which can compromise assets (including sensitive data), disrupt operations, or even cause physical harm. If AI systems are not properly protected and secured, they could be infiltrated or hacked, resulting in unauthorized access to data and this could be used for malicious purposes such as data manipulation, identity theft, or fraud. This raises concerns about data breaches, data security, and personal privacy.

Indeed, AI powered malware could help malicious actors to evade existing cyber defenses thereby enabling them to inflict significant destruction to supply chains and critical infrastructure. Examples include damage to power grids, disruption of financial systems, and others.

6. Insufficient AI Resilience
The global approach to AI resilience is naturally impacted by the chaotic approach to some of the other areas noted above. Where AI systems are vulnerable to cyberattacks, this can allow hackers to disrupt operations leading to possible unforeseen circumstances which are difficult (if not impossible) to prepare for. This can impact on the reliability and robustness of the AI system and its ability to perform as intended in real-world conditions and to withstand, rebound, or recover from a shock, disturbance or disruption. AI systems can of course also make errors, incorrect diagnoses, faulty predictions, or other mistakes, sometimes termed “hallucinations.”

Where an AI system malfunctions or fails for whatever reason, this can lead to unintended consequences or safety hazards that could negatively impact on individuals, society, and the environment. This may be of particular concern in critical domains such as power, transportation, health, and finance.

7. Ineffective AI Controls
The global approach to AI controls also seems to be somewhat disorganized. Once AI systems are deployed, it can be difficult to change them. This can make it difficult to adapt to new circumstances or to correct mistakes. There are therefore some concerns that an overemphasis on automated technical controls (such as bias detection and mitigation) and not enough attention given to the importance of human control can create a false sense of security and mask the need for human control mechanisms.

As AI systems become more sophisticated, there is a real risk that humans will lose control over AI leading to situations where AI may make decisions that have unintended consequences that can significantly impact on individuals’ lives with potentially harmful consequences. Increasing the autonomy of AI systems without the appropriate safeguards and controls in place raises valid concerns about issues such as ethics, responsibility, accountability, and potential misuse.

8. Failures by AI Assurance Providers
There is currently no single, universally accepted framework or methodology for AI assurance. Different organizations and countries have varying approaches, leading to potential inconsistencies. The opaque nature and increasing complexity of AI can make it difficult to competently assess AI systems, creating gaps in assurance practices, and thus hindering the provision of comprehensive assurance.

The expertise required for effective AI assurance is often a scarce commodity and may be unevenly distributed which in turn can create accessibility challenges for disadvantaged areas and groups. The lack of transparency, ethical concerns, and the lack of comprehensive AI assurance can lead to an erosion of public trust and confidence in AI technologies which can hinder its adoption and potentially create resistance to its potential benefits. Given all of the above, the provision of AI assurance can be a potential minefield for assurance providers.

AI Value Destruction and Collateral Damage

Should any assurance provider worth their salt benchmark these eight critical AI defense components against a simple five-step maturity model (1. Dispersed, 2. Centralized, 3. Global (enterprise-wide), 4. Integrated, 5. Optimized), each of them, individually and collectively, would currently be rated at only step 1, Dispersed. This level of immaturity is itself a recipe for value destruction.

Each of these eight critical AI defense components is interconnected, intertwined, and interdependent: each one impacts, and is impacted by, every other component. They are links in a chain, and the chain is only as strong as its weakest link. Collectively they can provide an essential cross-referencing system of checks and balances that helps preserve AI stakeholder value. The existence of deficiencies and weaknesses in more than one of these critical components can therefore result in exponential collateral damage to stakeholder value.
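The weakest-link logic of the five-step maturity benchmark can be sketched in a few lines. This is a hypothetical illustration only: the component names and scores below are assumptions drawn from the article's argument, not real assessment data.

```python
# Hypothetical sketch of the article's five-step maturity benchmark, where an
# AI defense program's overall rating follows the weakest-link principle.

MATURITY_SCALE = {
    1: "Dispersed",
    2: "Centralized",
    3: "Global (Enterprise-wide)",
    4: "Integrated",
    5: "Optimized",
}

# Illustrative scores for the eight defense components the article names;
# the article argues each currently sits at step 1.
component_scores = {
    "governance": 1,
    "risk_management": 1,
    "compliance": 1,
    "intelligence": 1,
    "security": 1,
    "resilience": 1,
    "controls": 1,
    "assurance": 1,
}

def overall_maturity(scores: dict) -> str:
    """The program is only as mature as its weakest component."""
    return MATURITY_SCALE[min(scores.values())]

print(overall_maturity(component_scores))  # Dispersed
```

Under this weakest-link view, raising seven components to step 5 while leaving one at step 1 still leaves the overall program rated Dispersed, which is the point of the chain metaphor.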

Examples of Potential Value Destruction

Misuse and Abuse: AI technologies can be misused and abused for all sorts of malicious purposes with potentially catastrophic results. They can be used for deception, to shape perceptions, or to spread propaganda. AI generated deepfake videos can be used to spread false or misleading information, or to damage reputations. Other sophisticated techniques could be used to spread misinformation and be used in targeted disinformation campaigns to manipulate public opinion, undermine democratic processes (elections and referendums) and destabilize social cohesion (polarization and radicalization).

Privacy, Criminality, and Discrimination: AI powered surveillance such as facial recognition can be intentionally used to invade people’s privacy. AI technologies can help in the exploitation of vulnerabilities in computer systems and can be applied for criminal purposes such as committing fraud or the theft of sensitive data (including intellectual property). They can be used for harmful purposes such as cyberattacks and to disrupt or damage critical infrastructure. In areas such as healthcare, employment, and the criminal justice system AI bias can lead to discrimination against certain groups of people based on their race, gender, or other protected characteristics. It could even create new forms of discrimination potentially undermining democratic freedoms and human rights.

Job Displacement and Societal Impact: As AI technologies (automobiles, drones, robotics, and others) become more sophisticated, they are increasingly capable of performing tasks that were once thought to require human workers. AI powered automation of tasks raises concerns about mass job displacement (typically affecting the most vulnerable workers) and the potential for widespread unemployment, which could impact labor markets and social welfare, potentially leading to business upheaval, industry collapse, economic disruption, and social unrest. AI also has the potential to amplify and exacerbate existing power imbalances, economic disparities, and social inequalities.

Autonomous Weapons: AI controlled weapons systems could make decisions about when and who to target, or potentially make life-and-death decisions (and kill indiscriminately) without human intervention, raising concerns about ethical implications and potential unintended consequences. Indeed, the development and proliferation of autonomous weapons (including WMDs) and the competition among nations to deploy weapons with advanced AI capabilities raises fears of a new arms race and the increased risk of a nuclear war. This potential for misuse and possible unintended catastrophic consequences could ultimately pose a threat to international security, global safety, and ultimately humanity itself.

The Singularity: The ultimate threat potentially posed by the AI singularity or superintelligence is a complex and uncertain issue which may (or may not) still be on the distant horizon. The potential for AI to surpass human control and pose existential threats to humanity cannot and should not be dismissed and it is imperative that the appropriate safeguards and controls are in place to address this existential risk. The very possibility that AI could play a role in human extinction should at a minimum raise philosophical questions about our ongoing relationship with AI technology and our required duty of care. Existential threats cannot be ignored and addressing them cannot be deferred or postponed.

AI Value Preservation Imperative

Under the prevailing circumstances, the occurrence of some or all of the above AI-related hazards carries both an unacceptably high probability and an unacceptably high impact, with potentially catastrophic outcomes for a wide range of stakeholder groups. Serious stewardship, oversight, and regulation concerns have already been publicly expressed by AI experts, researchers, and backers. This is an urgent issue that requires urgent action. It is one matter where a proactive approach is demanded; we simply cannot accept a reactive approach to this challenge. In such a situation, “prevention is much better than cure,” and it is certainly not a time to “shut the barn door after the horse has bolted.”

Addressing this matter is by no means an easy task but it is one which needs to be viewed as a compulsory or mandatory obligation. Like many other challenges facing human beings on Planet Earth this is one that will require global engagement and a global solidarity of purpose.

AI value preservation requires a harmonization of global, international, and national frameworks, regulations, and practices to help ensure consistent implementation and the avoidance of fragmentation. This means greater coordination, knowledge sharing, and wider adoption in order to help ensure a robust and equitable global AI defense program.

This needs to begin with a much greater appreciation and understanding of the nature of AI value dynamics (creation, preservation, and destruction) in order to help foster responsible innovation. Sooner rather than later, the approach to due diligence needs to include adopting a holistic, multi-dimensional and systematic vision that involves an integrated, inter-disciplinary, and cross-functional approach to AI value preservation. Such an approach can help contribute to a more peaceful and secure world, by creating a more trustworthy, responsible, and beneficial AI ecosystem for all.

This pre-mortem simply cannot be allowed to develop into a post-mortem!


Sean Lyons is a value preservation & corporate defense author, pioneer, and thought leader. He is the author of “Corporate Defense and the Value Preservation Imperative: Bulletproof Your Corporate Defense Program.”

EU Passes World’s First Comprehensive AI Law
https://compliancechief360.com/eu-passes-worlds-first-comprehensive-ai-law/
Fri, 15 Mar 2024 17:34:22 +0000

The European Parliament approved the Artificial Intelligence Act (AIA), a regulation aimed at ensuring safety and compliance with fundamental rights while boosting innovation in the artificial intelligence (AI) context. AIA, which is set to take effect in increments over the next few years, ultimately establishes obligations for AI based on its potential risks and level of impact.

AIA is the world’s first set of regulations designed to oversee the field of AI. “We finally have the world’s first binding law on artificial intelligence, to reduce risks, create opportunities, combat discrimination, and bring transparency,” said Brando Benifei, a European Union lawmaker from Italy. “Thanks to Parliament, unacceptable AI practices will be banned in Europe and the rights of workers and citizens will be protected. The AI Office will now be set up to support companies to start complying with the rules before they enter into force. We ensured that human beings and European values are at the very center of AI’s development.”

The new law comes at a point where many countries have introduced new AI rules. Last year, the Biden administration approved an executive order requiring AI companies to notify the government when developing AI models that may pose serious risk to national security, national economic security, or national public health and safety.

AIA Bans Specific Uses of AI

AIA bans certain AI applications that threaten citizens’ rights, including biometric categorization systems based on sensitive information and real-time and remote biometric identification systems, such as facial recognition. The use of AI to classify people based on behavior, socio-economic status, or personal characteristics, or to manipulate human behavior or exploit people’s vulnerabilities, will also be forbidden.

However, some exceptions may be allowed for law enforcement purposes. “Real-time” remote biometric identification systems will be allowed in a limited number of serious cases, while “post” remote biometric identification systems, where identification occurs after a significant delay, will be allowed to prosecute serious crimes and only after court approval.

AIA also introduces new transparency rules that mainly affect generative AI. The regulation sets out multiple transparency requirements that this sort of AI will have to satisfy, including compliance with EU copyright law. This entails disclosing when content is generated by AI, implementing measures within the model to prevent the generation of illegal content, and providing summaries of copyrighted data used during the model’s training process. Additionally, artificial or manipulated images, audio, or video content (“deepfakes”) must be clearly labeled as such.

AIA is projected to become officially effective by May or June, pending some final procedural steps, including approval from EU member states. Implementation of provisions will occur gradually, with countries required to prohibit banned AI systems six months after the law’s enactment.


Jacob Horowitz is a contributing editor at Compliance Chief 360°

DoJ Secures $2.7 Billion Through False Claim Act Actions In 2023
https://compliancechief360.com/doj-secures-2-7-billion-through-false-claim-act-actions-in-2023/
Tue, 12 Mar 2024 21:53:27 +0000

The U.S. Department of Justice (DoJ) announced that it obtained more than $2.68 billion in settlements and judgments under the False Claims Act. The government and whistleblowers were party to 543 settlements and judgments, the highest number in a single year. Recoveries since 1986, when Congress substantially strengthened the civil False Claims Act, now total more than $75 billion.

“Protecting taxpayer dollars from fraud and abuse is of paramount importance to the Department of Justice – and these enforcement figures prove it,” said Acting Associate Attorney General Benjamin Mizer. “The False Claims Act remains one of our most important tools for rooting out fraud, ensuring that public funds are spent properly, and safeguarding critical government programs.”

The False Claims Act penalizes those who knowingly and falsely claim money from the United States or knowingly fail to pay money owed to the United States. Its purpose is to safeguard government programs and operations that provide access to medical care, support our military and first responders, protect American businesses and workers, help build and repair infrastructure, offer disaster and other emergency relief, and provide many other critical services and benefits.

“As the record-breaking number of recoveries reflects, those who seek to defraud the government will pay a high price,” said Assistant Attorney General Boynton, head of the DoJ’s Civil Division. “The American taxpayers deserve to know that their hard-earned dollars will be used to support the important government programs and operations for which they were intended.”

Of the more than $2.68 billion in False Claims Act settlements and judgments reported by the DoJ this past fiscal year, over $1.8 billion related to matters involving the health care industry, including managed care providers, hospitals, pharmacies, laboratories, long-term acute care facilities, and physicians. The $1.8 billion figure includes only recoveries arising from federal losses; in many of these cases, the department was also instrumental in recovering additional amounts for state Medicaid programs.

Health Care Fraud

In 2023, health care fraud remained a leading source of False Claims Act settlements and judgments. These recoveries restore funds to federal programs such as Medicare, Medicaid, and TRICARE, the health care program for service members and their families. But just as important, enforcement of the False Claims Act deters others who might try to cheat the system for their own gain, and in many cases, also protects patients from medically unnecessary or potentially harmful actions. As in years past, the act was used to pursue matters involving a wide array of health care providers, goods, and services.

In one of its largest settlements, The Cigna Group agreed to pay more than $172 million to resolve allegations that it submitted inaccurate diagnosis codes for its Medicare Advantage plan enrollees in order to increase its payments from Medicare. The DoJ obtained another $22.5 million from Martin’s Point Health Care over similar allegations.

The department also secured numerous settlements and judgments from companies involved in unnecessary services and substandard care, the opioid epidemic, and unlawful kickbacks.

Although these actions demonstrate the DoJ’s focus on the healthcare industry, the recoveries in 2023 also reflect the department’s other key enforcement priorities, including fraud in pandemic relief programs and alleged violations of cybersecurity requirements in government contracts and grants. Considering the trends of the past year, however, it is reasonable to anticipate that healthcare will continue to be a primary focus for the department.


 

The post DoJ Secures $2.7 Billion Through False Claim Act Actions In 2023 appeared first on Compliance Chief 360.

]]>
https://compliancechief360.com/doj-secures-2-7-billion-through-false-claim-act-actions-in-2023/feed/ 0
FCA Announces Its Pledge to Enhance Efficiency in Enforcing Cases. https://compliancechief360.com/fca-announces-its-pledge-to-enhance-efficiency-in-enforcing-cases/ https://compliancechief360.com/fca-announces-its-pledge-to-enhance-efficiency-in-enforcing-cases/#respond Mon, 11 Mar 2024 19:53:23 +0000 https://compliancechief360.com/?p=3495 The Financial Conduct Authority (FCA) announced that it is now committed to carrying out enforcement cases more quickly as the organization seeks to increase the deterrent impact of its enforcement actions. The FCA is a private financial regulatory body in the United Kingdom (UK). The Authority’s purpose is to regulate the conduct of many of […]

The post FCA Announces Its Pledge to Enhance Efficiency in Enforcing Cases. appeared first on Compliance Chief 360.

]]>
The Financial Conduct Authority (FCA) announced that it is now committed to carrying out enforcement cases more quickly as the organization seeks to increase the deterrent impact of its enforcement actions.

The FCA is a private financial regulatory body in the United Kingdom (UK). Its purpose is to regulate the conduct of many of the businesses located in the UK in order to maintain integrity and stability within the financial markets. In other words, it is the UK’s counterpart to the U.S. Securities and Exchange Commission, except that the FCA operates independently of the country’s government.

Going forward, the FCA will focus on a streamlined portfolio of cases aligned with its strategic priorities, where it can deliver the greatest impact. It will also close cases more quickly where no enforcement outcome is achievable.

Transparency Is The Top Priority

The FCA’s proposed strategy represents a significant departure from current practice, under which investigations are announced only in very limited circumstances. As part of the new approach, the FCA has begun a consultation on plans to be more transparent when an enforcement investigation is opened. Under the plans, the FCA will publish updates on investigations as appropriate and be open about when cases have been closed with no enforcement outcome.

“By being more transparent when we open and close cases we can enhance public confidence by showing that we are on the case,” said Therese Chambers, Executive Director of Enforcement and Market Oversight at the FCA. “At the same time, we will amplify the deterrent impact of our work by enabling firms to understand the types of serious failings that can lead to an investigation, helping them to change their own behavior more quickly. Greater transparency will also drive greater accountability for us as an enforcement agency.”

Steve Smart, also an Executive Director of Enforcement and Market Oversight, added: “Reducing and preventing serious harm is a cornerstone of our strategy. By delivering faster, targeted and transparent enforcement, we will reduce harm and deter others. We will also make greater use of our intervention powers to stop harm in real time.”

Any decision to announce an investigation would be taken on a case-by-case basis and would depend on a variety of factors indicating whether doing so is in the public interest. These include whether an announcement would protect and enhance the integrity of the UK financial system, reassure the public that the FCA is taking appropriate action, or assist in any investigations.

Announcing an investigation does not mean the FCA has concluded that there has been misconduct or a breach of its requirements. Investigations into individuals will be treated differently; the FCA will not usually announce them.

These changes mark a significant shift in the FCA’s enforcement approach. In implementing it, the FCA aims to resolve each of its enforcement actions quickly and efficiently, and in doing so to bring greater transparency, accountability, and integrity to the UK’s financial marketplace.


 

The post FCA Announces Its Pledge to Enhance Efficiency in Enforcing Cases. appeared first on Compliance Chief 360.

]]>
https://compliancechief360.com/fca-announces-its-pledge-to-enhance-efficiency-in-enforcing-cases/feed/ 0
Boeing Pays U.S Department of State $51 million for Export Violations https://compliancechief360.com/boeing-pays-u-s-department-of-state-51-million-for-export-violations/ https://compliancechief360.com/boeing-pays-u-s-department-of-state-51-million-for-export-violations/#respond Fri, 08 Mar 2024 20:53:38 +0000 https://compliancechief360.com/?p=3501 Boeing has agreed to pay $51 million in a settlement with the U.S Department of State (DoS) to resolve around 200 violations of the Arms Control Act and the International Traffic in Arms Regulations (ITAR). The DoS and Boeing reached this settlement following an extensive compliance review by the Office of Defense Trade Controls Compliance […]

The post Boeing Pays U.S Department of State $51 million for Export Violations appeared first on Compliance Chief 360.

]]>
Boeing has agreed to pay $51 million in a settlement with the U.S. Department of State (DoS) to resolve around 200 violations of the Arms Export Control Act and the International Traffic in Arms Regulations (ITAR). The DoS and Boeing reached this settlement following an extensive compliance review by the Office of Defense Trade Controls Compliance in the Department’s Bureau of Political-Military Affairs.

The settlement between the Department and Boeing addresses Boeing’s unauthorized exports and retransfers of technical data to foreign employees and contractors; unauthorized exports of defense products, including unauthorized exports of technical data to China; and violations of the terms and conditions of Directorate of Defense Trade Controls authorizations.

“The U.S. government reviewed copies of the files referenced in this voluntary disclosure and determined that certain unauthorized exports to the [People’s Republic of China] caused harm to U.S. national security,” the State Department alleged. “The U.S. government also concluded that a certain unauthorized export to Russia created the potential for harm to U.S. national security.”

Boeing’s Violations in Detail

Specifically, the complaint alleged that multiple Boeing employees downloaded sensitive data while in China relating to various defense products, such as fighter jets, an airborne warning system, and an attack helicopter. Additionally, Boeing revealed at least 80 instances across 18 countries in which its employees downloaded sensitive ITAR-controlled data.

All of the alleged violations were voluntarily disclosed by the aerospace giant, and a considerable majority occurred before 2020. Boeing cooperated with the Department’s review and has implemented many improvements to its compliance program since the issues were discovered.

“We are committed to our trade controls obligations, and we look forward to working with the State Department under the agreement announced today,” Boeing said in a public statement. “We are committed to continuous improvement of that program, and the compliance undertakings reflected in this agreement will help us advance that objective.”

The Terms of the Settlement

Under the terms of the 36-month Consent Agreement, Boeing will pay a civil penalty of $51 million. The Department has agreed to suspend $24 million of this amount on the condition that the funds be used to strengthen the company’s compliance program. In addition, for a period of at least 24 months, an external compliance officer will oversee Boeing to ensure that the company adheres to the agreement. Boeing must also undergo two external audits of its ITAR compliance program and implement additional compliance measures.

This settlement demonstrates the Department’s pivotal role in furthering U.S. national security and foreign policy by controlling the export of defense articles. It also serves as a reminder that companies exporting defense products must do so with explicit authorization from the DoS.


Jacob Horowitz is a contributing editor at Compliance Chief 360°

The post Boeing Pays U.S Department of State $51 million for Export Violations appeared first on Compliance Chief 360.

]]>
https://compliancechief360.com/boeing-pays-u-s-department-of-state-51-million-for-export-violations/feed/ 0
SEC Pulls Back on Climate Disclosure in Final Version of the Rules https://compliancechief360.com/sec-pulls-back-on-climate-disclosure-in-final-version-of-the-rules/ https://compliancechief360.com/sec-pulls-back-on-climate-disclosure-in-final-version-of-the-rules/#respond Thu, 07 Mar 2024 20:51:20 +0000 https://compliancechief360.com/?p=3503 The Securities and Exchange Commission voted to finalize its rules on climate-change disclosures titled,  “The Enhancement and Standardization of Climate-Related Disclosures for Investors.” In an about face that the SEC began signaling last month, the Commission cut key provisions from the proposal, including a requirement to disclose Scope 3 emissions, that proponents say would have […]

The post SEC Pulls Back on Climate Disclosure in Final Version of the Rules appeared first on Compliance Chief 360.

]]>
The Securities and Exchange Commission voted to finalize its rules on climate-change disclosures, titled “The Enhancement and Standardization of Climate-Related Disclosures for Investors.” In an about-face that the SEC began signaling last month, the Commission cut key provisions from the proposal, including a requirement to disclose Scope 3 emissions, which proponents say would have given investors important insight into how companies are responding to climate change.

In a landmark decision, the SEC voted to implement new regulations requiring public companies to disclose climate change risks and their greenhouse gas emissions. This long-awaited ruling marks a significant shift in corporate reporting, potentially impacting thousands of companies and bringing the U.S. closer to alignment with global efforts on climate transparency.

The SEC’s final rule follows a two-year process that included a proposed rule in March 2022 and culminated in a vote split along party lines. The new regulations aim to address investor concerns about the financial implications of climate change on businesses.

“These final rules build on past requirements by mandating material climate risk disclosures by public companies and in public offerings,” said SEC Chair Gary Gensler in a statement. “The rules will provide investors with consistent, comparable, and decision-useful information, and issuers with clear reporting requirements. Further, they will provide specificity on what companies must disclose, which will produce more useful information than what investors see today. They will also require that climate risk disclosures be included in a company’s SEC filings, such as annual reports and registration statements rather than on company websites, which will help make them more reliable.”

The Final Rules in Detail

Specifically, the final rules will require a public company to disclose:

  • Climate-related risks that have had or are reasonably likely to have a material impact on the registrant’s business strategy, results of operations, or financial condition;
  • The actual and potential material impacts of any identified climate-related risks on the registrant’s strategy, business model, and outlook;
  • If, as part of its strategy, a registrant has undertaken activities to mitigate or adapt to a material climate-related risk, a quantitative and qualitative description of material expenditures incurred and material impacts on financial estimates and assumptions that directly result from such mitigation or adaptation activities;
  • Specified disclosures regarding a registrant’s activities, if any, to mitigate or adapt to a material climate-related risk including the use, if any, of transition plans, scenario analysis, or internal carbon prices;
  • Any oversight by the board of directors of climate-related risks and any role by management in assessing and managing the registrant’s material climate-related risks;
  • Any processes the registrant has for identifying, assessing, and managing material climate-related risks and, if the registrant is managing those risks, whether and how any such processes are integrated into the registrant’s overall risk management system or processes;
  • Information about a registrant’s climate-related targets or goals, if any, that have materially affected or are reasonably likely to materially affect the registrant’s business, results of operations, or financial condition. Disclosures would include material expenditures and material impacts on financial estimates and assumptions as a direct result of the target or goal or actions taken to make progress toward meeting such target or goal;
  • For large accelerated filers (LAFs) and accelerated filers (AFs) that are not otherwise exempted, information about material Scope 1 emissions and/or Scope 2 emissions;
  • For those required to disclose Scope 1 and/or Scope 2 emissions, an assurance report at the limited assurance level, which, for an LAF, following an additional transition period, will be at the reasonable assurance level;
  • The capitalized costs, expenditures expensed, charges, and losses incurred as a result of severe weather events and other natural conditions, such as hurricanes, tornadoes, flooding, drought, wildfires, extreme temperatures, and sea level rise, subject to applicable one percent and de minimis disclosure thresholds, disclosed in a note to the financial statements;
  • The capitalized costs, expenditures expensed, and losses related to carbon offsets and renewable energy credits or certificates (RECs) if used as a material component of a registrant’s plans to achieve its disclosed climate-related targets or goals, disclosed in a note to the financial statements; and
  • If the estimates and assumptions a registrant uses to produce the financial statements were materially impacted by risks and uncertainties associated with severe weather events and other natural conditions or any disclosed climate-related targets or transition plans, a qualitative description of how the development of such estimates and assumptions was impacted, disclosed in a note to the financial statements.

Climate-Change Disclosure Light?

While these rules mark a significant change in U.S. climate disclosure regulations, they are narrower compared to the SEC’s 2022 draft, with the Commission reducing stringency in several areas.

For example, the SEC removed the Scope 3 disclosures requirement which, if it were in effect, would have obligated specific companies to provide data about the emissions generated by their suppliers and customers.

While the proposed rule required disclosure of Scope 1 (direct emissions from company operations) and Scope 2 (emissions associated with the purchase of energy) in all cases, the final rule requires this disclosure only if the company deems these emissions to be “material.”

The moves angered some proponents of stricter climate-change proposals. “While we are pleased to see the SEC issue its long-awaited climate disclosure rule, we are disappointed the final rule falls far short of what consumers and investors deserve: full, transparent, reliable, and comparable disclosure of greenhouse gas emissions through direct, indirect, supply chain, and product use,” said Cathy Cowan Becker, responsible finance campaign director at environmental advocacy group, Green America. “It’s unfortunate that the SEC would bend to pressure from corporate interests to significantly weaken its proposed rule.”

Others applauded the change. “While the SEC appears to have moved away from some of the most troubling provisions in the original proposal, questions remain about several aspects of the final climate disclosure rule,” said Business Roundtable CEO Joshua Bolten in a statement. “The rule contains multiple, highly complex provisions that have not been subject to notice and comment. Business Roundtable will continue to evaluate the rule to better assess its impact and scope.”

The final rules will become effective 60 days after publication of the adopting release in the Federal Register. Compliance dates will be phased in for all registrants, depending on each registrant’s filer status.


Jacob Horowitz is a contributing editor at Compliance Chief 360°

The post SEC Pulls Back on Climate Disclosure in Final Version of the Rules appeared first on Compliance Chief 360.

]]>
https://compliancechief360.com/sec-pulls-back-on-climate-disclosure-in-final-version-of-the-rules/feed/ 0
FINRA Fines Goldman Sachs For Trade-Monitoring Failures https://compliancechief360.com/finra-fines-goldman-sachs-for-trade-monitoring-failures/ https://compliancechief360.com/finra-fines-goldman-sachs-for-trade-monitoring-failures/#respond Fri, 23 Feb 2024 20:08:53 +0000 https://compliancechief360.com/?p=3466 The Financial Industry Regulatory Authority (FINRA) recently announced that it has fined Goldman Sachs for not properly monitoring trades ranging from 2009 to 2023. The banking giant has agreed to pay around $500,000 in order to settle these claims, but did not admit or deny any of FINRA’s findings. According to Goldman’s letter of acceptance […]

The post FINRA Fines Goldman Sachs For Trade-Monitoring Failures appeared first on Compliance Chief 360.

]]>
The Financial Industry Regulatory Authority (FINRA) recently announced that it has fined Goldman Sachs for failing to properly monitor trades executed between 2009 and 2023. The banking giant agreed to pay roughly $500,000 to settle the claims, without admitting or denying any of FINRA’s findings.

According to Goldman’s letter of acceptance, waiver, and consent with FINRA, the firm was charged with violating FINRA Rule 3110(a) and its predecessor, NASD Rule 3010(a), which require member firms to “establish and maintain a system” to ensure that each of their employees complies with applicable securities laws and regulations. A violation of these rules is also a violation of FINRA Rule 2010, which requires members “to observe high standards of commercial honor and just and equitable principles of trade in the conduct of their business.”

According to FINRA, Goldman failed to include warrants, rights, units, and certain equity securities in nine surveillance reports designed to identify potentially manipulative proprietary and customer trading. “The firm failed to detect that nine surveillance reports for potentially manipulative trading excluded various securities types,” FINRA said. “By failing to have a reasonably designed supervisory system, Goldman violated NASD Rule 3010 and FINRA Rules 3110 and 2010.”

Goldman’s System Failures

As a result of the gaps in its surveillance reports, Goldman could not perform reasonable monitoring of trading activity for potential manipulation. The nine affected reports would have identified approximately 5,000 alerts for potentially manipulative trading activity in those securities from February 2009 through mid-April 2023. Goldman added the missing securities to the surveillance reports either in response to FINRA’s investigation or through the firm’s adoption of new surveillance reports. By April 2023, Goldman had finished remediating all surveillance reports.

Goldman’s supervisory system, including its written procedures, also did not require a review of its automated surveillance reports to ensure they included all relevant securities traded as part of the firm’s business. Because of this, the bank did not notice that nine surveillance reports, which could have indicated manipulative trading, overlooked warrants, rights, units, and specific equity securities.

Goldman Sachs encountered additional scrutiny from both FINRA and the U.S. Securities and Exchange Commission (SEC) due to lapses in reporting. In September, the firm settled with FINRA and the SEC, agreeing to pay a total of $12 million. The allegations centered on Goldman’s failure to fulfill its recordkeeping and reporting duties, as it provided inaccurate trading data in response to numerous regulatory requests.

The SEC and FINRA asserted that for approximately ten years, Goldman provided regulators with securities trading records containing inaccuracies or omissions related to millions of the firm’s transactions.


Jacob Horowitz is a contributing editor at Compliance Chief 360°

The post FINRA Fines Goldman Sachs For Trade-Monitoring Failures appeared first on Compliance Chief 360.

]]>
https://compliancechief360.com/finra-fines-goldman-sachs-for-trade-monitoring-failures/feed/ 0
Van Eck Settles SEC Claim Over Undisclosed Social Influencer Involvement https://compliancechief360.com/van-eck-settles-sec-claim-over-undisclosed-social-influencer-involvement/ https://compliancechief360.com/van-eck-settles-sec-claim-over-undisclosed-social-influencer-involvement/#respond Fri, 16 Feb 2024 20:24:01 +0000 https://compliancechief360.com/?p=3484 The Securities and Exchange Commission (SEC) announced that registered investment adviser Van Eck Associates has agreed to pay a $1.75 million to settle charges that it failed to disclose a social media influencer’s role in the launch of an exchange-traded fund (ETF) in 2021. According to the SEC’s order, in March 2021, Van Eck Associates […]

The post Van Eck Settles SEC Claim Over Undisclosed Social Influencer Involvement appeared first on Compliance Chief 360.

]]>
The Securities and Exchange Commission (SEC) announced that registered investment adviser Van Eck Associates has agreed to pay $1.75 million to settle charges that it failed to disclose a social media influencer’s role in the launch of an exchange-traded fund (ETF) in 2021.

According to the SEC’s order, in March 2021, Van Eck Associates launched the VanEck Social Sentiment ETF to track an index based on “positive insights” from social media and other data. The ETF launched on the New York Stock Exchange under the ticker BUZZ. The provider of that index told Van Eck Associates that it planned to retain a well-known and controversial social media influencer to promote the ETF. While the SEC did not name the influencer, several media sources claim that Van Eck partnered with Dave Portnoy, founder of the popular sports and pop culture website Barstool Sports, to help launch the fund.

To incentivize the influencer’s marketing and promotion efforts, the proposed licensing fee structure included a sliding scale linked to the size of the fund: as the fund grew, the influencer would receive a greater percentage of the management fee the fund paid to Van Eck Associates. However, as the SEC’s order finds, Van Eck Associates failed to disclose the influencer’s planned involvement and the sliding-scale fee structure to the ETF’s board when seeking approval for the fund’s launch and management fee.
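To illustrate how such a sliding-scale arrangement works, the sketch below models a licensing fee whose share of the management fee rises with fund size. The tiers, rates, and 0.75 percent expense ratio are invented for this illustration and are not taken from the actual Van Eck licensing agreement, whose terms were not publicly detailed.

```python
# Hypothetical sliding-scale licensing fee. All tiers and rates below are
# illustrative assumptions, not terms of the actual Van Eck agreement.

def influencer_share(fund_aum: float) -> float:
    """Return the hypothetical fraction of the management fee paid out
    under the licensing arrangement; the share rises with fund size."""
    tiers = [
        (50_000_000, 0.10),    # up to $50M in assets -> 10% of the fee
        (250_000_000, 0.15),   # up to $250M          -> 15%
        (float("inf"), 0.20),  # above $250M          -> 20%
    ]
    for ceiling, share in tiers:
        if fund_aum <= ceiling:
            return share
    return tiers[-1][1]

MANAGEMENT_FEE_RATE = 0.0075  # assumed 0.75% annual management fee

# As assets grow, both the fee base and the influencer's cut grow,
# which is the incentive the board was never told about.
for aum in (25_000_000, 100_000_000, 500_000_000):
    fee = aum * MANAGEMENT_FEE_RATE
    print(f"AUM ${aum:,}: influencer receives ${fee * influencer_share(aum):,.0f}")
```

The point of the sketch is the compounding incentive: a larger fund produces both a larger management fee and a larger percentage of that fee for the promoter.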

The Investment Company Act

Section 15(c) of the Investment Company Act prohibits a registered investment company from entering into or renewing “any advisory contract unless the terms of the contract are approved by a majority of the independent directors of the investment company.” As a result, Section 15(c) imposes a duty on an adviser to furnish “such information as may reasonably be necessary for the directors to evaluate the terms of the adviser’s contract.”

The Commission stated that from December 2020 to June 2021, Van Eck did not disclose key details to the ETF’s board regarding the licensing agreement, the influencer’s participation, compensation, and other related controversies.

“Fund boards rely on advisers to provide accurate disclosures, especially when involving issues that can impact the advisory contract, known as the 15(c) process,” said Andrew Dean, Co-Chief of the Enforcement Division’s Asset Management Unit. “Van Eck Associates’ disclosure failures concerning this high-profile fund launch limited the board’s ability to consider the economic impact of the licensing arrangement and the involvement of a prominent social media influencer as it evaluated Van Eck Associates’ advisory contract for the fund.”

Van Eck Associates consented to the SEC’s finding that it violated the Investment Company Act and the Investment Advisers Act. Without admitting or denying the findings, it agreed to a cease-and-desist order and a censure in addition to the $1.75 million fine.


The post Van Eck Settles SEC Claim Over Undisclosed Social Influencer Involvement appeared first on Compliance Chief 360.

]]>
https://compliancechief360.com/van-eck-settles-sec-claim-over-undisclosed-social-influencer-involvement/feed/ 0
FinCEN Addresses Beneficial Ownership Reporting For Small Businesses https://compliancechief360.com/fincen-addresses-beneficial-ownership-reporting-for-small-businesses/ https://compliancechief360.com/fincen-addresses-beneficial-ownership-reporting-for-small-businesses/#respond Thu, 15 Feb 2024 17:29:31 +0000 https://compliancechief360.com/?p=3473 The head of the U. S’s Financial Crimes Enforcement Network (FinCEN) announced during a congressional hearing that the Network isn’t adopting a “gotcha” approach to enforcing compliance with the new regulations on reporting beneficial ownership information (BOI) by companies. Since implementing the Anti-Money Laundering Act of 2020, FinCEN’s highest priority has been achieving successful implementation […]

The post FinCEN Addresses Beneficial Ownership Reporting For Small Businesses appeared first on Compliance Chief 360.

]]>
The head of the U.S. Financial Crimes Enforcement Network (FinCEN) announced during a congressional hearing that the agency is not adopting a “gotcha” approach to enforcing compliance with the new regulations on reporting beneficial ownership information (BOI) by companies.

Since the enactment of the Anti-Money Laundering Act of 2020, FinCEN’s highest priority has been the successful implementation of the beneficial ownership reporting requirements. These requirements obligate a company to disclose the individuals who ultimately own or control it, as well as those who formed it, including any individual who exercises substantial control over the company indirectly through another entity.

Companies are required to disclose the name, date of birth, and home address of every beneficial owner, along with submitting identification like a passport or driver’s license. However, FinCEN exempts certain “large” companies from the BOI reporting obligations, defining them as those with over 20 full-time employees in the U.S. and minimum gross receipts or sales of $5 million, among other criteria.
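The exemption criteria named above can be expressed as a simple predicate. This is a simplified sketch based only on the thresholds the article mentions; FinCEN's actual "large operating company" exemption (31 CFR 1010.380) carries additional conditions, such as a physical U.S. office, that are folded into a single assumed flag here.

```python
def qualifies_for_large_company_exemption(
    full_time_us_employees: int,
    prior_year_gross_receipts: float,
    meets_other_criteria: bool = True,  # placeholder for the rule's remaining conditions
) -> bool:
    """Simplified test of FinCEN's large operating company exemption from
    BOI reporting, using only the thresholds cited in the article:
    more than 20 full-time U.S. employees and at least $5 million in
    gross receipts or sales."""
    return (
        full_time_us_employees > 20
        and prior_year_gross_receipts >= 5_000_000
        and meets_other_criteria
    )

# A 21-employee firm with $5M in receipts clears both cited thresholds;
# a 20-employee firm does not, regardless of revenue.
print(qualifies_for_large_company_exemption(21, 5_000_000))   # True
print(qualifies_for_large_company_exemption(20, 10_000_000))  # False
```

Note that the employee test is strictly "more than 20," so a firm with exactly 20 full-time U.S. employees must still file.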

The purpose of this requirement is to filter out shell companies used primarily for money laundering. Such shell corporations are usually smaller companies with fewer financial resources. Because smaller companies are often not current with new regulations, the reporting requirements have drawn significant criticism from Congress. During the congressional hearing, House Financial Services Committee Chairman Patrick McHenry referenced a survey conducted by the National Federation of Independent Business revealing that 90 percent of small businesses are unaware of their newly imposed reporting obligations.

Andrea Gacki, the head of FinCEN, addressed these criticisms at the hearing. “I want to clearly state that FinCEN has no interest in hitting small businesses with excessive fines or penalties. The CTA penalizes willful violations of the law, and we are not seeking to take ‘gotcha’ enforcement actions,” Gacki said. “Looking ahead, we will continue our efforts to promote compliance with the reporting requirements and ensure broad awareness of the safe, secure, and easy-to-use filing system.”

FinCEN’s Outreach Efforts

FinCEN has dedicated much time and effort to actively reaching out to smaller companies to notify them of the BOI reporting requirements. “We have held outreach events with a wide range of small business advocacy associations, corporate service providers, third party trade associations, industry trade associations, and good governance organizations,” Gacki said in her congressional hearing statement. “We have also opened channels to directly engage with small businesses and other users actively filing reports.”

FinCEN’s website also includes a direct link to its Contact Center, so users can submit questions about filing or report any issues they encounter when submitting their reports. The agency is also using a chatbot to give businesses an interactive tool that quickly answers their questions.

Although many small companies are not yet aware of the BOI reporting requirements, FinCEN is working to notify every company of the rules so that it can truly filter out those formed solely for money laundering and other financial crimes.

As the head of FinCEN said to the House of Representatives in the recent congressional hearing, “We know that the vast majority of the small businesses that will be impacted by this reporting requirement are law-abiding businesses that want to do the right thing, and we also know that many of them may not be familiar with FinCEN… This is why outreach has been and will continue to be a primary focus of our efforts.”

FinCEN now requires companies to complete their BOI filings by January 1, 2025. Those who violate the requirements face “civil fines of up to $500 per day that the violation continues, criminal fines of up to $10,000, and up to two years of imprisonment,” according to FinCEN.
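Because the civil fine accrues per day, exposure grows linearly with the length of the violation. The arithmetic can be sketched as follows, using the $500-per-day figure cited above; the example deadline and filing date are hypothetical.

```python
from datetime import date

# Figures cited in FinCEN's penalty description quoted above.
CIVIL_FINE_PER_DAY = 500    # civil fine accrues for each day the violation continues
CRIMINAL_FINE_CAP = 10_000  # separate criminal fines (and imprisonment) may also apply

def max_civil_exposure(deadline: date, filed: date) -> int:
    """Upper-bound civil fine for filing a BOI report after the deadline.
    A report filed on or before the deadline accrues nothing."""
    days_late = max((filed - deadline).days, 0)
    return days_late * CIVIL_FINE_PER_DAY

# Hypothetical example: a company misses the Jan 1, 2025 deadline
# and does not file until Mar 1, 2025 (59 days late).
print(max_civil_exposure(date(2025, 1, 1), date(2025, 3, 1)))  # 29500
```

Even a two-month delay can therefore translate into tens of thousands of dollars of potential civil liability, which is why FinCEN's outreach emphasizes awareness of the deadline rather than enforcement surprises.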


Jacob Horowitz is a contributing editor at Compliance Chief 360°

The post FinCEN Addresses Beneficial Ownership Reporting For Small Businesses appeared first on Compliance Chief 360.

]]>
https://compliancechief360.com/fincen-addresses-beneficial-ownership-reporting-for-small-businesses/feed/ 0