Artificial intelligence has ascended to the forefront of workplace policy concerns for U.S. employers, eclipsing issues such as immigration and diversity, equity, and inclusion (DEI). A significant 84% of employers anticipate substantial business impacts stemming from AI-related regulatory changes within the next twelve months. This figure represents a doubling of the proportion that held this view in 2025, according to new research from Littler, a prominent employment and labor law firm. The firm’s 14th annual employer survey, which gathered insights from over 300 C-suite executives, in-house legal counsel, and HR professionals across the United States, underscores a rapidly evolving landscape of employer priorities and anxieties.
This seismic shift in employer focus highlights a critical governance gap that could leave organizations vulnerable. While the adoption of formal AI policies has seen a dramatic increase, with 68% of respondents now reporting such policies—a substantial jump from 38% in 2025—the implementation of robust oversight mechanisms lags behind. Barely more than half of surveyed organizations have established formal review or approval processes for AI tools (55%) or have implemented restrictions on the types of information employees can input into these systems (54%). This disparity raises concerns about potential misuse, data breaches, and inadvertent biases embedded within AI applications.
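The input restrictions the survey describes can be as simple as a screening step that checks text for sensitive data before it is sent to an external AI tool. The sketch below is purely illustrative: the pattern names, the regular expressions, and the function names are assumptions for the example, not anything drawn from the Littler survey or any particular vendor's controls.

```python
import re

# Hypothetical blocklist; a real policy would cover many more data
# types (health records, client names, source code, and so on).
BLOCKED_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
}

def screen_prompt(text: str) -> list[str]:
    """Return the names of blocked data types detected in a prompt.

    An empty list means the text passed the screen and may be
    forwarded to an external AI tool under this (hypothetical) policy.
    """
    return [name for name, pattern in BLOCKED_PATTERNS.items()
            if pattern.search(text)]

def is_allowed(text: str) -> bool:
    """True when no blocked data type was detected."""
    return not screen_prompt(text)
```

Pattern matching of this kind only catches well-formed identifiers; organizations with stricter requirements typically layer it with human review or approved-tool allowlists of the sort 55% of respondents report having.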
The specter of AI-related litigation looms large, with nearly four in five respondents (79%) expressing apprehension. The primary drivers of this concern are data privacy (49%), the potential for discrimination or bias (45%), and the challenge of ensuring compliance with a growing patchwork of state and local AI laws (43%). As AI technologies become more integrated into business operations, employers face the complex task of navigating uncharted legal and ethical territories, demanding proactive strategies to mitigate emerging risks.
The Widening Governance Gap in AI Adoption
The Littler survey’s findings paint a picture of rapid AI adoption outpacing the development of comprehensive governance frameworks. While the initial rush to implement AI tools is understandable, driven by promises of efficiency and innovation, the lack of established protocols for review, approval, and data input restrictions suggests a reactive rather than proactive approach to risk management. This governance deficit is particularly concerning given the sensitive nature of data often processed by AI systems.
The projected regulatory changes anticipated by 84% of employers indicate an impending period of significant adaptation. Governments worldwide are grappling with how to regulate AI, focusing on areas such as algorithmic transparency, data protection, and the prevention of harmful biases. Employers that have not yet solidified their internal AI governance structures may find themselves scrambling to comply with new mandates, potentially facing penalties and reputational damage.
The leading concerns—data privacy, discrimination, and compliance with emerging laws—are interconnected. The collection and processing of vast amounts of data by AI systems inherently raise privacy concerns. If this data is biased or used in discriminatory ways, it can lead to legal challenges and erode public trust. Furthermore, the fragmented nature of AI regulation at the state and local levels in the U.S. adds another layer of complexity, requiring employers to monitor and adhere to a diverse set of rules that may conflict or overlap.
Employee Concerns: Fear and Ethical Dilemmas Persist
Beyond the realm of AI, a separate survey by Outten & Golden reveals a deeply ingrained issue of fear within the workplace, impacting employees’ willingness to report misconduct. A startling one in three employees indicated they would refrain from reporting workplace wrongdoing due to fear of retaliation. This pervasive apprehension not only stifles internal accountability but also suggests a significant disconnect between stated organizational values and the lived experiences of employees.
The Outten & Golden survey, which polled over 1,000 Americans, also found that 22% of respondents had personally witnessed unethical or illegal conduct at work. This high prevalence of observed misconduct, coupled with the reluctance to report it, creates a fertile ground for systemic problems to fester. The fear of retaliation, whether perceived or actual, acts as a powerful deterrent, shielding problematic behaviors and potentially emboldening those who engage in them.
A critical factor contributing to this reluctance is a widespread lack of awareness regarding whistleblower protections. More than 40% of respondents admitted to being unaware of government whistleblower programs that offer confidentiality, legal protection, and financial incentives for reporting wrongdoing. This knowledge gap leaves employees feeling unprotected and vulnerable, further exacerbating their fear of reprisal. Educating employees about these protections is crucial for fostering a culture where reporting misconduct is seen as a safe and valued act.
Adding to the ethical concerns, 21% of employees reported feeling pressured to compromise their ethical standards at work. This pressure falls more heavily on men: 26% of male respondents reported experiencing it, a higher rate than among their female counterparts. This finding suggests that ethical compromises may be more prevalent than openly acknowledged, driven by internal or external pressures that challenge employees’ integrity.
Furthermore, a significant portion of the workforce harbors skepticism about their employers’ transparency. Thirteen percent of employees do not believe their employers communicate honestly and openly. This distrust tends to increase with age, indicating that more experienced employees may have developed a greater awareness of corporate communication tactics and are more attuned to potential disingenuousness. This perception of a lack of transparency can erode employee morale and loyalty.
Interestingly, the Outten & Golden survey also touched upon employee sentiment regarding DEI initiatives. A substantial majority, nearly three-quarters (73%), believe that DEI should be a workplace priority. However, a notable disconnect persists between employee aspirations and organizational practices. Just under 30% of employees stated that their employer does not treat DEI as a priority, suggesting that while the importance of DEI is widely recognized, its implementation and integration into daily operations may be falling short of employee expectations.
Boardroom Engagement and Security Imperatives
In a related development, corporate boards of directors are demonstrating a heightened focus on physical security and executive protection. According to a survey conducted by EY, a global business and government consultancy, 87% of corporate decision-makers reported that boards have become more actively involved in these areas over the past twelve to eighteen months. This increased board oversight reflects a growing recognition of the multifaceted security risks that organizations face, extending beyond traditional cybersecurity threats.
The surge in board engagement is likely a response to a more volatile geopolitical and social landscape, as well as an acknowledgment of the potential for sophisticated physical attacks or targeted disruptions. In tandem with this increased oversight, nearly eight in ten companies surveyed (79%) reported an increase in their security budgets compared to 2024. Despite these increased investments, a concerning statistic emerged: only 12% of respondents indicated that their organizations were fully prepared to detect a targeted attack against an executive or employee. This suggests that while security is a growing priority, preparedness remains a significant challenge.
Organizations are also proactively addressing the evolving threat landscape posed by cyber-attacks and the misuse of AI. Nearly 60% of executives and board leaders have engaged in pressure-testing scenarios involving AI’s potential to bypass physical security controls or create deepfake impersonations of executives to gain unauthorized access. This forward-thinking approach to identifying and mitigating novel threats is crucial in an era where the lines between physical and digital security are increasingly blurred. The EY survey involved 250 leaders who influence executive protection, physical security programs, budget, and policy, providing a robust perspective on boardroom priorities in this domain.
The Peril of Poor Data Quality in the Age of AI
The reliability of data underpinning critical business decisions is facing significant scrutiny, particularly as organizations accelerate their adoption of artificial intelligence. A survey by OneStream, an enterprise finance software company, revealed that nearly half of senior finance and IT executives (47%) made a material business decision based on inaccurate, incomplete, or outdated financial data within the past year. This pervasive issue of data quality has tangible and costly consequences.
The financial ramifications are substantial. Among executives who experienced decisions based on flawed data, 72% reported that it cost their organization $500,000 or more, with a striking 37% indicating damages exceeding $1 million. The downstream effects of such decisions are far-reaching, including delayed financial reporting and closing processes (44%), missed revenue opportunities (41%), and significant compliance issues (35%). These findings underscore the fundamental importance of data integrity for sound financial management and operational efficiency.
The risk appears to be compounding as AI use grows. Executives who have made decisions based on bad data are four times more likely to utilize ten or more AI tools than their peers. This suggests that a heavier reliance on AI, without adequate attention to underlying data quality, can amplify existing data problems and lead to even more flawed outcomes. Furthermore, 95% of these executives express concerns about AI-related risks, including flawed decision-making (37%) and financial misreporting (20%). This creates a precarious situation where the very tools intended to improve decision-making are being fed unreliable information, leading to potentially disastrous results.
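One minimal guardrail against feeding unreliable data into AI tools is a quality gate that rejects a batch of records before it ever reaches a model. The sketch below is a hypothetical illustration only; the field names (`as_of`, `amount`) and the tolerances are assumptions for the example, not figures or practices drawn from the OneStream survey.

```python
from datetime import date

# Hypothetical thresholds; real tolerances would come from a
# data-governance policy, not from this sketch.
MAX_AGE_DAYS = 30
MAX_MISSING_RATIO = 0.05

def quality_gate(records, today=None):
    """Check a batch of financial records before it feeds a model.

    Each record is a dict with an 'as_of' date and an 'amount' that
    may be None when the value is missing. Returns a list of issue
    descriptions; an empty list means the batch passed the gate.
    """
    today = today or date.today()
    if not records:
        return ["empty batch"]
    issues = []
    stale = [r for r in records if (today - r["as_of"]).days > MAX_AGE_DAYS]
    if stale:
        issues.append(f"{len(stale)} stale records (> {MAX_AGE_DAYS} days old)")
    missing = sum(1 for r in records if r["amount"] is None)
    if missing / len(records) > MAX_MISSING_RATIO:
        issues.append(f"missing-value ratio {missing / len(records):.0%} exceeds limit")
    return issues
```

A gate like this does not fix bad data; it simply forces the staleness and completeness problems the survey describes to surface before a decision is made rather than after.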
A generational dimension adds nuance to these findings. Younger executives, aged 25 to 44, tend to use AI more extensively—82% employ three or more AI tools for decision-making, compared to 69% of their more experienced counterparts. However, they are also more exposed to the risks associated with poor data quality, with 51% reporting making material decisions on faulty data, versus 39% of older leaders. Moreover, they are more than four times as likely to report significant financial or compliance impacts as a result (17% versus 4%). This highlights a critical need for robust data governance and AI literacy programs tailored to different age groups within organizations.
Tax Compliance: A Growing Burden Outpacing Organizational Capacity
The complexity and sheer volume of tax compliance mandates are increasingly overwhelming finance leaders, according to a new survey by Sovos, a tax compliance software company. More than half of senior finance executives (58%) describe domestic and global tax compliance requirements as complex for their organizations. Compounding this challenge, 44% believe that regulations are changing too rapidly for them to manage effectively. The survey, which polled 300 finance leaders at companies with revenues of $500 million or more across the financial services, manufacturing, and retail sectors, indicates that compliance is becoming a significant constraint on broader business strategy.
The relentless pace of regulatory change has emerged as a primary concern. Sixty-one percent of respondents identified the speed of new government mandates as the most significant compliance risk over the next two to three years, eclipsing concerns about increasing complexity of global operations (56%) and rising costs of compliance technology and staffing (42%). This regulatory velocity is not merely an administrative burden; it actively prevents organizations from pursuing strategic growth opportunities. Three-quarters of respondents reported that compliance limitations hinder their ability to be more strategic in business decisions, including geographic expansion and new product launches.
While AI is being considered as a potential solution to streamline tax compliance, its adoption is accompanied by significant anxiety, particularly regarding data security. Nearly 40% of respondents plan to evaluate or implement AI-driven tax compliance tools in the next fiscal year. However, a striking 86% expressed extreme or very high concern about data security when utilizing AI-enhanced compliance solutions. This highlights a critical need for robust security protocols and transparent data handling practices for any AI-powered compliance tools entering the market. The challenge for organizations is to harness the potential of AI for efficiency without compromising the confidentiality and integrity of sensitive financial data.
The confluence of these survey findings paints a comprehensive picture of the evolving challenges facing modern organizations. From the transformative impact of AI and the persistent issues of employee trust and ethical conduct to the growing demands on board oversight and the complexities of regulatory compliance, businesses are navigating an increasingly intricate landscape. Proactive strategies, robust governance, and a commitment to ethical practices are no longer optional but essential for sustained success and resilience.
