Article body

Introduction

The widespread adoption of artificial intelligence (AI) in managing workers’ performance has increased the use of algorithms for hiring, monitoring, scoring, disciplining, and terminating employees across all company sizes and worker categories, including full-time, app-based, part-time, home-based or remote employees, and supply chain workers.[1] This transformation of work has improved companies’ productivity, reduced labour costs, and heightened shareholder value, while creating new job opportunities. However, concerns have been raised about the extent to which algorithmic management systems (AMS) exacerbate bias, discrimination, workplace surveillance, excessive productivity demands, flawed worker performance assessment, unfair working conditions and economic exploitation, mental health issues and the unproductivity that follows them, invasions of worker privacy, power asymmetries and workers’ inability to bargain over their data rights, job displacement, layoffs,[2] and de-unionization.[3] The proliferation of real-world examples demonstrates the detrimental impact that workplace algorithms are having on workers.[4] These problems undermine workers’ well-being and companies’ performance, likely worsening societal inequalities.[5]

Regulatory action is necessary to address these concerns. Ex-ante human rights impact assessment of AMS is critical and has been widely adopted. These assessments, conducted before deployment, can identify potential risks, violations of worker rights, and other harmful outcomes. Early identification allows companies to mitigate these risks, reducing the likelihood of costly legal battles and ex-post regulatory interventions. Although AMS affect multiple stakeholders, and although the shared ownership of workplace data and the need to enhance the quality and legitimacy of AMS assessments may justify a collective or multi-stakeholder governance of those assessments, many countries have not embraced such an approach. In countries adhering to a shareholder primacy tradition, workers are frequently excluded from the governance of AMS assessments. Even in jurisdictions with stakeholder-oriented corporate governance models, where workers are allowed to participate in AMS governance, corporate resistance can be a significant hurdle to such involvement. Ideally, companies, their directors, workers, and regulators would collaborate to conduct AMS assessments. However, several obstacles hinder this collective governance, including the prioritization of AI risks over workers’ human rights, the limitations of workers’ participation, and potential opposition from directors and shareholders.

This paper argues that consideration should be given to expanding directors’ and officers’ duties to include obligations to collaborate with the ex-ante AMS assessment process and its collective governance, including facilitating meaningful worker involvement. Such collaborative duties may remove significant barriers to the collective governance of AMS assessments by obligating disclosure, coordination, negotiation, and rectification of algorithms so that they serve the interests of companies and multiple stakeholders,[6] including safeguarding workers’ human rights, such as equality, non-discrimination, health, and safety. The effectiveness of this multi-stakeholder governance of AMS assessments depends on directors’ collaborative duties, which may prove costly in the short term, but the long-term benefits for companies and society are substantial. Such duties may enhance the long-term sustainability of companies and society by building efficient, equitable, and sustainable AMS, fostering an adaptable and productive workforce, encouraging innovation, and promoting efficient management in the digital economy. Ultimately, this will also maximize shareholder value in the medium and long term.

This paper’s first section elaborates on the collective or multi-stakeholder governance of AMS assessments, reviewing current regulatory interventions and highlighting the advantages and disadvantages of such a governance approach. The second section reviews directors’ current duties regarding workers’ interests and AMS, discusses corporate opposition to the collective governance of AMS assessments, and articulates the reasons for expanding directorial duties.

I. Collective Governance of the Impact Assessment of Algorithmic Management Systems

A pivotal legal strategy aimed at mitigating the adverse impacts of AMS involves mandating ex-ante impact assessments of such systems.[7] This is consistent with a “secure by design” approach to AI development that is emerging as a key principle guiding the development of safe and fair AI systems.[8] These evaluations endeavor to pre-emptively identify and rectify risks, harms, or infringements on workers’ rights and well-being prior to algorithm deployment,[9] while improving companies’ performance. This early intervention can bring important benefits. These include systemic and comprehensive evaluation of algorithms beyond individual cases, significant prevention of harm to workers that can sometimes be irreversible, algorithm transparency that can foster worker engagement and oversight, avoidance of business operation disruptions due to harmful workplace algorithms, and savings in monitoring, litigation, and regulatory costs for companies, their workers, and regulators as AI risks are dealt with ex-ante. Impact assessments can mitigate the potential failures of governments and regulatory agencies that may not enforce algorithmic management regulations effectively due to a lack of political will, technical incompetence, underfunding, or corruption. These assessments may be guided by human rights protection principles[10] and are often disclosed for public accountability.

The governance of AMS impact assessments varies significantly across different corporate governance models. In countries with shareholder-oriented models, worker involvement in AMS governance is often minimal or non-existent. Conversely, stakeholder-oriented models tend to encourage and facilitate worker participation in these processes. The contrast between shareholder and stakeholder models in AMS governance highlights the broader implications of corporate governance structures on designing, implementing, and overseeing workplace algorithms. This divergence explains the varied regulatory approaches countries are adopting for AMS governance. As AMS continues to reshape workplace dynamics, the choice of AMS governance model becomes increasingly crucial. It influences how workplace algorithms are developed and deployed and how their impacts are measured and mitigated. This phenomenon calls for ongoing evaluations of regulatory frameworks to ensure they adequately address the complex challenges posed by algorithmic management in diverse corporate and cultural contexts.

A. Shareholder-Centric Governance of Algorithmic Management Systems and its Limitations

Recent legislative developments require companies to conduct impact assessments of their AMS, coupled with disclosure obligations and human rights considerations.[11] Notably, the emphasis on algorithm disclosure has emerged as a common policy response to mitigate risks and abuses associated with algorithmic management. For instance, in Ontario, employers are obliged to furnish employees with their written policy pertaining to electronic monitoring.[12] This policy must indicate the methods and circumstances for monitoring employees and the purposes for which employers use any information obtained through it, although such use is not limited to the stated purposes. The disclosure obligation is limited to electronic monitoring, and employees are only allowed to complain about a contravention of their employers’ obligation to disclose their written electronic monitoring policies. New legislation also requires employers to disclose the use of AI in hiring.[13] Employers thus retain substantial control over algorithmic systems and workplace data, significantly curtailing employees’ voice and oversight.

The US appears to be moving in the same direction as Ontario. The General Counsel of the National Labor Relations Board (NLRB) has urged the NLRB to protect employees from employers’ abuse of technology.[14] Under this new framework, an employer would be presumed to violate the National Labor Relations Act if its surveillance and management practices, viewed as a whole, could interfere with or prevent employees from engaging in protected activities.[15] Subject to exceptions, if the employer’s business needs outweigh employees’ Section 7 rights, the employer could be required “to disclose to employees the technologies it uses to monitor and manage them, its reasons for doing so, and how it is using the information it obtains.”[16] The rationale behind this approach is that only with this information can employees effectively exercise their Section 7 rights and take appropriate measures to protect the confidentiality of their protected activities if they choose to do so.[17] Some US states have begun to embrace a similar approach to regulating AI in workplaces. For instance, New York’s Assembly Bill A9315A would require employers that engage in electronic monitoring or use automated employment decision tools to screen candidates or employees to conduct impact assessments of such AI tools.[18] Employment candidates would be informed of the use of such tools, revealing the emphasis on disclosure as the dominant legal intervention.[19] At the US federal level, the Stop Spying Bosses Act would require disclosures and prohibit employers from engaging in surveillance of workers.[20] Additionally, in 2023, President Biden issued an executive order recognizing the value of integrating workers’ views in regulating the use of AI in workplaces and the imperative of supporting workers’ ability to bargain collectively to mitigate AI risks.[21]

Furthermore, some jurisdictions requiring companies to conduct impact assessments of their AMS do not mandate consultation with, or involvement of, workers. For instance, in Canada, the Artificial Intelligence and Data Act (AIDA), currently under discussion in Parliament, would require companies to conduct impact assessments informed by human rights considerations and to disclose AI systems, including workplace algorithms.[22] While the regulator would supervise high-impact AI systems[23] by requiring disclosure, auditing, changes, or cessation of harmful systems, consultation with stakeholders, especially workers, is not required despite growing calls for workers’ participation in the regulation and oversight of AI systems.[24] The proposed Consumer Privacy Protection Act would also require employers to conduct some assessment of their data systems, disclose collected information, and provide some explanation upon request.[25] A similar approach is found in Quebec’s data protection law, which requires privacy impact assessments and disclosure to the data subject, with some opportunity for the latter to submit observations.[26]

Canada’s adopted approach to AMS governance largely reflects the shareholder-oriented nature of its corporate governance model. Despite amendments to Canada’s shareholder primacy tradition, which introduced non-binding considerations of stakeholders’ interests,[27] the practical impact on algorithmic management governance appears minimal. The persistence of companies’ control of AMS and the significant disregard of workers’ participation in AMS governance suggest that the introduction of stakeholder considerations has not fundamentally altered the power dynamics in Canadian corporate governance, particularly regarding AMS. Overall, this approach emphasizes ex-ante impact assessments and disclosure of AI systems with some ex-post rights of action to address violations of workers’ rights.

This government-supervised, company-driven model of algorithmic management governance, however, raises significant concerns regarding corporate accountability. Under this framework, companies retain substantial control over such systems and their assessments, compromising transparency and disregarding the involvement of workers in governance processes. This approach aligns with liberal market economies that have adopted shareholder-oriented corporate governance models that often exclude workers from the governance of companies. The disclosure of workplace algorithms and their assessment may not undergo sufficient scrutiny by stakeholders, particularly workers, who often lack the resources and means to effectively monitor companies’ AI deployment. Consequently, they face challenges in overseeing AMS and holding companies and their directors accountable for AI-related failures or abuses, even when workplace algorithm disclosures are in place. This predicament stems from workers’ knowledge and resource constraints, time limitations, and fear of retaliation, including termination for speaking out. It is thus necessary to explore alternative regulatory interventions beyond mere algorithm disclosures and company-controlled impact assessments of workplace algorithms.[28]

While the governance of AMS has been shareholder-centric and company-driven, it is crucial to recognize the growing involvement of workers and their unions in this process, particularly through collective bargaining.[29] These stakeholders are increasingly asserting their influence on corporate policies concerning AI implementation in the workplace, with the objectives of safeguarding workers from AI-related risks and preserving their fundamental rights.

Workers and unions are not merely passive recipients of AI-driven changes but are emerging as active participants in the decision-making process. Their involvement serves as a critical counterbalance to profit-driven implementations of AI technologies, ensuring that ethical considerations and worker well-being are factored into AMS design and deployment. This evolving dynamic highlights the importance of fostering a collaborative approach to AI governance in the workplace. It underscores the need for companies to engage in meaningful dialogue with their workforce, recognizing that effective and responsible AMS implementation requires a delicate balance between technological innovation, operational efficiency, and the protection of worker interests. However, unions face significant challenges in their efforts to engage with companies in governing AMS, including an unfriendly legal framework that may not facilitate worker involvement, corporate resistance, and the rapid pace of technological changes.[30] Additionally, unions have limited opportunities to renegotiate collective agreements to specifically address AI-related risks.[31] The situation is even more precarious for non-unionized workers, who have considerably less leverage to engage in negotiations with their employers regarding AMS. As a result, there is still a significant risk that AMS implementation may proceed without sufficient consideration of workers’ interests and concerns.

B. Collective Governance of Algorithmic Management Systems

A collective or multi-stakeholder governance of ex-ante impact assessments of AMS can serve as an effective regulatory mechanism. By involving diverse stakeholders—such as workers, employers, regulatory bodies, technology experts, and civil society—this mechanism can ensure a more balanced evaluation of the effects and implications of workplace algorithms.[32] This inclusive approach represents a departure from shareholder primacy models of AMS governance and aligns instead with stakeholder-friendly approaches that can be better suited to managing the complexities of workplace algorithms.

Involving workers who are directly affected by these algorithms is crucial. This involvement, whether through consultation, participation, or collective bargaining, helps protect workers’ rights, including privacy and AI transparency, safeguards their well-being and can assist companies in tackling technological changes associated with AMS.[33] Moreover, integrating diverse perspectives enhances the quality and legitimacy of AMS assessments by incorporating competing interests and fostering comprehensive evaluations of potential risks and benefits. Collaborative governance can improve the implementation and acceptance of workplace algorithms, ultimately benefiting companies and society. This inclusive approach addresses immediate concerns about AMS and contributes to the long-term sustainability and social responsibility of technological innovations in the workplace.

Aligned with their long-standing stakeholder model of corporate governance, European countries have established requirements for worker consultation in the oversight of AMS and their assessments. Europe’s General Data Protection Regulation (GDPR)[34] and the Platform Work Directive mandate the disclosure and impact assessment of AI systems, alongside consultation with, and participation of, data subjects and workers, including collective bargaining.[35] This Directive applies to platform workers only, not to all sectors. The EU’s Artificial Intelligence Act (EU AI Act) requires deployers to perform ex-ante fundamental rights impact assessments of high-risk AI systems, including those related to employment.[36] Disclosure obligations are also imposed on deployers, with some rights granted to affected persons to request an explanation of the use of AI systems.[37] An advisory forum representing stakeholders is established to advise the regulator.[38] It appears that workers and unions would be represented in that forum only and would provide expertise and advice to the regulator. The EU AI Act’s lack of a worker-involvement requirement in the ex-ante assessment of AMS risks undermining more protective national legislation mandating workers’ participation. Notably, recent legal reforms in Germany have given workers the right to be consulted on the deployment of artificial intelligence in workplaces.[39] Similar legislative initiatives in the US, such as California’s Bill 1651, echo this approach.[40]

There have been some successful cases of collective bargaining through which workers co-manage AMS and include AI-related rights, such as access to algorithm information (including risk and impact assessment information, data management, and mitigating measures), worker profiling bans, non-monitoring of worker-union communications, and joint commissions to manage workplace algorithms.[41] However, companies largely persist in maintaining significant control over the governance of AMS, undermining the effectiveness of collective oversight and social dialogue between companies and their employees. Companies are increasingly reluctant to disclose and negotiate their AMS with workers, in defiance of legal mandates,[42] hindering workers’ involvement in overseeing workplace algorithms and their impact assessment.

Despite its advantages, collective or multi-stakeholder governance faces challenges such as the need for transparency, potential reluctance from companies to disclose algorithmic details, and inherent power imbalances among stakeholders. Transparent processes and AMS disclosure are essential for building trust and cooperation and for facilitating successful AI-related collective bargaining, but companies may resist due to compliance and negotiation costs, loss of control, and concerns over trade secrets and cybersecurity.[43] Precarious, low-income, and minority workers are particularly vulnerable to these problems.

Addressing these corporate barriers to the collective governance of AMS and their ex-ante assessment appears crucial to ensuring their effectiveness. It is thus critical to explore the merits of further legal intervention. One alternative is to augment the duties of companies’ directors and officers regarding ex-ante impact assessments of AMS and workers’ involvement. Expanded duties could compel directors and officers not only to mitigate risks, harms, or human rights violations associated with AMS, but also to facilitate the collective governance of such systems, including collaboration with workers to assess them.

II. Towards the Expansion of Directors’ Duties to Enhance the Collective Governance of Algorithmic Management Assessments

A. Directors’ Duty of Loyalty and Duty of Care and Workers’ Interests

In liberal market economies with a shareholder primacy tradition, directors have primarily focused on maximizing shareholder value. The massive deployment of artificial intelligence in business activities is transforming companies’ governance and operations. Among these changes, the integration of algorithms in the workplace presents new challenges to directorial responsibilities, particularly concerning directors’ duties towards workers. It has thus become imperative to determine the scope of directors’ duty of loyalty and duty of care towards workers in the realm of algorithmic management, especially with respect to the impact assessments of workplace algorithms.

The duty of loyalty obligates directors to act in the best interests of the corporation. In countries with a shareholder primacy tradition, this has often emphasized shareholder value maximization. However, across several jurisdictions, directors’ duty of loyalty is evolving to encompass non-shareholders’ interests, including consideration of workers’ interests.[44] Thus, directors are expected to integrate the interests of workers in designing, developing, and implementing workplace algorithms while advancing corporate goals. Nevertheless, this duty to consider workers’ interests is often not binding,[45] leading directors to either overlook it or accord it little consideration. Therefore, it seems unlikely that, in fulfilling their duty of loyalty, directors and officers will favor algorithm assessments that integrate workers’ interests or collaborate with workers’ representatives in a collective governance of workplace algorithms.

The duty of care mandates that directors and officers act with reasonable prudence and diligence in managing their companies. In Canada, directors and officers, in exercising their powers and discharging their duties, shall “exercise the care, diligence and skill that a reasonably prudent person would exercise in comparable circumstances.”[46] This duty of care is assessed against the objective standard of a prudent person.[47] Accordingly, directors and officers may need to conduct assessments, monitoring, and mitigation of harmful risks associated with companies’ operations.[48] Regarding algorithmic management, directors would be expected to oversee workplace algorithms to ensure that they serve companies’ interests and avoid risks[49] and harms to stakeholders, particularly workers, whose exposure to these risks is widely recognized. Directors may thus be required to identify potential risks associated with AMS and take proactive measures to mitigate them; failure to do so could result in liability.[50] They may need to ensure that workplace algorithms are transparent, non-discriminatory,[51] and do not compromise workers’ rights or safety. In satisfying these obligations, directors may take appropriate measures for assessing, monitoring, disclosing, and auditing workplace algorithms, such as appointing committees, directors, or officers with AI expertise to conduct impact assessments of workplace algorithms at the design or development stages with some worker consultation.

The development of corporate sustainability laws and policies, which inter alia require companies to protect workers’ human rights, suggests that directors’ duty of care to safeguard workers’ rights against algorithm risks may also extend to supply chain workers.[52] However, several factors may lead directors to prioritize shareholder value over workers’ interests. The primacy of shareholder value maximization, combined with the business judgment rule granting directors a degree of discretion, may incentivize them to adopt cost-effective, profit-driven risk-mitigation measures for AMS, even if these measures compromise worker well-being or violate workers’ rights. Directors may prioritize short-term financial gains over the long-term benefits associated with ethical AMS, especially if immediate profits appear to conflict with workers’ interests. Companies and their directors may perceive necessary actions like algorithm transparency, third-party assessments, and worker negotiations as costly, counter-productive, and detrimental to their competitive advantage, innovation, or profit-maximization goals.[53] Instead, they may engage in minimal, symbolic, or deceptive compliance regarding AMS, including providing the bare minimum of information or even misleading disclosures about workplace algorithms.[54] Moreover, the duty of care obligates directors to safeguard their companies’ cybersecurity and trade secrets, which may justify avoiding full or substantial algorithm disclosure,[55] hindering cooperation and negotiations with workers. This lack of transparency and engagement ultimately hampers attempts at collective governance of AMS, including impact assessments thereof. Directors’ potential lack of AI expertise and the current limitations of AI knowledge in the face of complex algorithms can further discourage them from actively overseeing AMS. Thus, addressing the tension between shareholder value and workers’ interests when deploying AMS requires a reassessment of directors’ duties and the development of new governance mechanisms that promote transparency, accountability, and meaningful worker engagement while enhancing company performance.

Although the legal requirement for directors to directly negotiate workplace algorithms with workers remains unsettled, emerging laws and regulations are increasingly mandating companies to involve workers in the assessment, disclosure, and even the design of AMS.[56] This trend may signify a shift in directors’ duties, requiring companies to integrate worker consultation throughout the AMS cycle. Thus, the right of workers to participate in the design, evaluation, and monitoring of AMS is gaining traction, including workers’ involvement in impact assessments of AMS and the right to the assistance of AI experts during these assessments.[57] This trend cuts across jurisdictions adhering to either shareholder primacy or stakeholder models of corporate governance, despite their different approaches to AMS. These developments highlight the growing need for directors to manage workplace algorithms in collaboration with workers.

B. Corporate Reluctance to AMS Disclosure and Negotiations

Even in the presence of laws or collective bargaining agreements requiring worker consultation or coordination when deploying AMS, it remains unclear whether companies and their directors will comply with such emerging laws or instead engage in symbolic or circumventive compliance, particularly when directors’ new algorithm-related obligations are not clearly established. For example, a recent court ruling concerning Uber’s algorithm disclosure obligations highlights the significant resistance displayed by companies against such disclosures, impact assessments, and collective negotiations on workplace algorithms. In October 2023, the District Court of Amsterdam ruled that Uber had failed to comply with the April 2023 Court of Appeal order requiring Uber to provide algorithmic transparency regarding the automated decision to dismiss drivers from the UK and Portugal.[58] Uber had deactivated several drivers due to alleged repeated trip cancellation fraud. The case was brought by Worker Info Exchange (WIE) in support of the App Drivers and Couriers Union (ADCU). The court held that drivers were entitled to information about the automated dismissal decision, including the factors considered in the algorithmic decision and worker profiling data, so that they could assess and reasonably challenge such decisions if they wished to do so. However, Uber provided limited and unhelpful information, failing to disclose the legally required information despite the drivers’ request. The Amsterdam District Court questioned whether Uber had made a genuine effort to comply with the Court of Appeal order or was deliberately withholding information from workers to the advantage of its business interests.[59] Uber cited the need to safeguard its security and trade secrets as the reason for its reluctance to comply.[60] However, the District Court indicated that “[t]o the extent Uber relies on protection of its trade secrets, this (also) cannot benefit it, where it does not provide any information at all. This does not square with what the court of appeal has considered in par. 3.28.”[61] Similarly, in early 2022, Amazon warehouse workers from Germany, the UK, Italy, Poland, and Slovakia challenged the company’s AI opacity and filed access requests under Article 15 of the GDPR, demanding worker data transparency. UNI Global Union, the international federation of service-sector unions, and the privacy organization noyb supported the workers’ action. The workers complained that Amazon collected their data from multiple sources to feed its algorithms and then monitored and fired them while invading their privacy without their knowledge.[62]

Similarly, in Spain, companies are reluctant to disclose their workplace algorithms to workers despite their obligations to do so.[63] The 2021 Rider Law requires all Spanish companies to inform their employees about their use of algorithms in workplaces and the extent to which AI impacts their working conditions and labor rights.[64] However, very few companies comply with this mandate and, even when they do, algorithm disclosure is often minimal.[65] Companies frequently justify their non-compliance or poor compliance by citing the protection of their trade secrets and intellectual property rights, or the irrelevance of AMS disclosure.[66] This issue is further exacerbated by Spanish workers’ limited ability to request relevant algorithm information and their lack of capacity to process and utilize this information to safeguard themselves against AI risks and improve their working conditions.[67] These issues remain unregulated.[68] Although the government issued a workplace algorithm transparency guide, it has had little impact due to its non-binding nature and widespread disuse.[69] Thus, Spain’s social dialogue over workplace algorithms has largely not materialised.

Similar corporate reluctance to algorithmic transparency is occurring across jurisdictions. In Canada, where the corporate governance model incorporates some stakeholder considerations, employers are allegedly rushing to deploy AI in workplaces without engaging in coordination or negotiations with workers, while unions are moving slowly to respond to AI challenges.[70] Canadian companies are reportedly not being transparent about their use of AMS and its capabilities.[71] This apparent opacity raises concerns about the potential negative impact of AMS on workers and workers’ inability to engage in oversight and negotiations with their employers.

In the US, companies are increasingly reluctant to disclose their algorithms, data, and AI risks. Among similar proposals by other shareholders, the American Federation of Labor and Congress of Industrial Organizations (AFL-CIO), through an investment trust for union members, filed AGM proposals demanding greater transparency on AI at Apple, Comcast, Disney, Netflix, and Warner Bros.[72] The AFL-CIO also filed a proposal at Amazon demanding that a new committee of independent directors on AI be established to address human rights issues.[73] In articulating its proposal at Apple, the AFL-CIO raised concerns about the impact of AI on employees, including discrimination, bias, and mass layoffs.[74] In 2023, some of the AFL-CIO’s efforts were also devoted to asking companies to commit to respecting freedom of association and collective bargaining.[75] Disney and Apple engaged with the US Securities and Exchange Commission (SEC) in an unsuccessful attempt to remove the AFL-CIO’s AI proposal from AGM agendas.[76] Although important proxy advisors and investors supported the AFL-CIO’s Apple proposal, it failed on 28 February 2024,[77] whereas the proposals at Comcast and Disney were withdrawn by the AFL-CIO after the companies agreed to disclose more AI-related information.[78] Similarly, pension funds have supported unsuccessful AI disclosure proposals initiated by other shareholders. For instance, Arjuna Capital filed an AI risk disclosure proposal at Microsoft, and NBIM, the Office of the New York City Comptroller (which manages the city’s five pension funds), the Californian public pension fund CalSTRS, and Dutch pension fund manager PGGM supported the proposal.[79] It nevertheless failed, ultimately receiving 21.2% of the vote.[80]

Entertainment companies have also shown reluctance to negotiate the use of AI with writers’ unions. In May 2023, members of the Writers Guild of America (WGA) went on strike against movie and television producers, with one of their key demands being a ban on companies using AI for story pitches and scripts.[81] The union expressed concerns that AI could be employed to generate initial drafts, potentially reducing the number of writers needed for script development.[82] However, the studios hiring the writers did not want to negotiate hard limits on the use of AI, instead proposing to hold regular meetings with writers to discuss technological advancements.[83] While the WGA ultimately secured significant protections against AI in its new contract, including provisions that AI cannot be credited as a writer or create source material, the conflict was resolved only after a prolonged strike.

In October 2023, the SEC charged SolarWinds Corp. and its chief information security officer (CISO) with defrauding “… investors by overstating SolarWinds’ cybersecurity practices and understating or failing to disclose known risks” associated with a two-year-long cyberattack.[84] The company and the officer “… engaged in a campaign to paint a false picture of the company’s cyber controls environment…”[85] The complaint focused on Orion, the company’s flagship network monitoring product, which “was used by virtually all Fortune 500 companies and many US government agencies.”[86] Numerous corporate officers, cybersecurity organizations, and other business groups opposed the SEC action against SolarWinds and its CISO, citing concerns about excessive regulatory intervention, greater exposure to cybersecurity attacks, and discouraging CISOs from undertaking their jobs for fear of liability.[87]

Companies’ lack of AI transparency is prompting concerned shareholders to file proposals requesting that companies report on their use, oversight, and risks associated with AI.[88] While these cases do not necessarily involve workplace algorithms and workers’ interests, they illustrate companies’ increasing failures to provide algorithmic transparency, and their resistance to it, even when requested by shareholders or required by law. These problems are likely to increase in shareholder primacy contexts, as there are fewer legal requirements for collectively governing workplace algorithms than in stakeholder-oriented jurisdictions.

Overall, these examples demonstrate a growing trend of corporate reluctance to disclose companies’ algorithms and engage in meaningful negotiations and coordination with workers to collectively govern AMS. Companies seem intent on exclusively controlling workplace algorithms and data to maintain their power over workers.[89] This issue is exacerbated by the growing concentration of corporate ownership and the deterioration of workers’ well-being and union activity. Workers often find themselves in precarious positions, with limited ability to understand AMS and to access their data. This limitation, coupled with diminished collective bargaining power[90] and companies’ anti-union practices, further contributes to corporate and directorial reluctance to comply with and collaborate on the collective governance of workplace algorithms.

Even in the presence of new laws and regulations mandating companies and their directors to participate in impact assessments and disclose their workplace algorithms, their compliance may be modest, symbolic, or circumventive. Strengthening the obligations of directors in this context could prove beneficial.

C. Directors’ Collaborative Duties, Workers’ Rights, and Collective Governance of Workplace Algorithm Assessment

In light of the foregoing analysis, it is imperative to consider expanding the duties of directors and officers to ensure their cooperation with the impact assessments of AMS and their collective governance. While their duties associated with the deployment of algorithms and workers’ interests are not clearly established, companies and their directors and officers are increasingly compelled to engage in dialogue with stakeholders, notably workers, regarding the deployment and governance of AMS, including the ex-ante assessment thereof. This mandate stems from the confluence of directors’ emerging duty to consider employees’ interests, the burgeoning regulatory landscape requiring worker consultation and human rights considerations, workers’ growing calls for participation in AMS governance, both informally and through collective bargaining,[91] the benefits of AMS collective governance, the exigencies of modern labour dynamics, and ethical expectations.

The asymmetrical power dynamics around workplace algorithms provide further justification for expanding directors’ duties to foster a collective governance of AMS. The growing reluctance among corporations to fully embrace their emerging disclosure obligations associated with AMS also supports the consideration of directors’ collaborative duties. Furthermore, the commodification of data and algorithms as shared resources necessitates a reconceptualization of corporate fiduciary duties. Recognizing that these assets also derive value from the collective labor and expertise of workers, directors must acknowledge the moral imperative to uphold principles of fairness and equity in their utilization.

Failing to incorporate workers’ voices and concerns in the design, assessment, and implementation of AMS carries significant risks that can exacerbate existing power imbalances, erode trust and cooperation within the organization, and increase the likelihood of workforce resistance. This neglect can compromise the adequacy, efficacy, and legitimacy of AMS while undermining innovation, job satisfaction, and productivity. Ultimately, these consequences can be detrimental to short-term and long-term company performance. The lack of worker involvement can hinder the early identification and resolution of potential issues, leading to more significant problems down the line. Integrating worker perspectives into AMS governance is not just an ethical consideration but also a strategic imperative for sustainable business success.

Thus, directors should bear the duty to foster collaborative negotiations with workers for the collective governance of AMS, including their ex-ante assessment. This may entail imposing collaborative duties on directors and officers such as obligations to disclose workplace algorithm information, coordinate with workers and other stakeholders, negotiate algorithm design and content, address potential algorithm issues, and implement agreed-upon workplace algorithm policies. The following section elaborates on these proposed duties.

Companies and their directors should be explicitly required to disclose their workplace algorithms and their risks[92] and share relevant data with all stakeholders involved in the assessments, including third-party assessors and workers.[93] Requiring directors and officers to disclose their use of algorithms in managing workers promotes transparency and accountability. This disclosure, particularly when made prior to implementation, enables stakeholders, including workers, unions, and regulatory bodies, to scrutinize and evaluate algorithmic systems to prevent risks, harms, and any violations of human rights.

Algorithmic disclosure duties should be thoughtfully crafted to consider the imperative of companies and their directors to safeguard their cybersecurity, trade secrets, and competitive edge.[94] It is essential to establish additional obligations for workers, assessors, and regulators to protect the confidentiality of companies’ algorithms, which may include respecting their cybersecurity measures and safeguarding their proprietary information. Directors’ responsibilities regarding algorithm disclosure need not encompass exhaustive or full transparency. Rather, they should be obligated to provide sufficiently useful and detailed information about workplace algorithms. This approach may ensure that all stakeholders, especially workers, have the necessary information to effectively assess AMS. A reasonably comprehensive disclosure or explanation of algorithms strikes a balance between companies’ interests,[95] which promotes innovation and competitiveness, and the essential need to evaluate and oversee workplace algorithms.

Similarly, companies and their directors may be required to negotiate workplace algorithms with all stakeholders, notably workers,[96] avoiding companies’ exclusive control of AMS,[97] particularly when required by law or collective bargaining agreements.[98] Directors and their companies may be mandated to participate in negotiations with workers and other stakeholders concerning algorithmic management, bearing the obligation to adjust algorithms to align them with their companies’ economic goals, risk-mitigation plans, and workers’ human rights.[99] The latter may ultimately ensure that companies adhere to legal frameworks concerning workers’ rights. Directors may also be obligated to facilitate workers’ involvement in impact assessments, including enabling communications among workers and their union representatives and refraining from any form of obstruction, monitoring, or retaliation, including any interference with their collective bargaining rights.[100]

Moreover, companies and their directors may bear the legal responsibility of amending or rectifying any facet of workplace algorithms deemed unfair or abusive following initial impact assessments. These adjustments or corrections should be responsive to the protection of workers’ human rights.[101] These expanded directorial duties are in line with recommendations for companies to actively reduce AI harms and risks. For instance, the US Voluntary AI Commitments on “Ensuring Safe, Secure and Trustworthy AI” also recommend that companies strive to avoid harms and work proactively to minimize AI risks.[102] Companies are also encouraged to take action to promote workers’ skill development, enabling them to benefit from AI[103] rather than seeing their situations deteriorate.

D. Potential Pitfalls and Prevailing Advantages of Directors’ Collaborative Duties

Directors’ collaborative duties seeking to facilitate a collective or multi-stakeholder governance of algorithmic assessments may encounter some pitfalls.[104] Compliance and regulatory costs associated with the implementation of such duties can be significant. Obligations to assess, disclose, negotiate, and rectify workplace algorithms could entail substantial investments for companies[105] that can be detrimental to economic production, competitiveness, and shareholder value. This situation may explain directors’ potential reluctance to engage in negotiations or implement changes that better safeguard workers’ interests. The challenges of compliance and enforcement are further exacerbated by the lack of AI expertise and resources within regulatory bodies and worker organizations, hindering their effective oversight. Consequently, there is a risk of non-compliance or attempts to circumvent directors’ collaborative duties. Directors may find themselves caught between the pressure to leverage AI for competitive advantage and the need to address workers’ concerns. Moreover, the complexity of AI systems can create a knowledge gap between companies and their directors, on the one hand, and workers, on the other, making meaningful negotiation and oversight challenging and costly. This imbalance could lead to a scenario where directors, even if well-intentioned, may struggle to fully appreciate or address the implications of their AI-related decisions on the workforce.

The potential for reputational damage stemming from harmful AMS can serve as a substantial incentive for companies and their directors to fulfill their collaborative duties, cooperate with the impact assessments of algorithms, and rectify them before implementation. Companies and their directors may meticulously weigh the adverse publicity and negative image against the potential advantages of retaining existing algorithms without alteration. In many instances, this calculation may lean towards preserving the status quo, especially if the cost of rectification is considerably high or if the alterations pose a threat to the company’s business plans or profitability. While reputational damage can wield substantial influence, it may not always overshadow the perceived advantages of maintaining existing algorithms, particularly if the potential fallout appears manageable or tolerable within the broader business strategy.

Worker activism may also play a pivotal role in reinforcing compliance with the extended duties of directors linked to the assessments and regulation of algorithms. As companies are mandated to disclose information and engage in algorithm negotiations, workers are incentivized to become more proactive in their oversight of directors and algorithmic management. This enhanced involvement of workers fosters a climate of accountability within companies.[106] Moreover, when workers actively exert pressure, regulatory agencies are also prompted to ensure compliance with the expanded duties imposed on directors concerning algorithm disclosures and assessments. The collective action and increased vigilance of workers may act as catalysts for compliance and enforcement.[107] This worker activism may ensure that companies, their directors, and regulators remain steadfast in adhering to their obligations associated with the collective governance of algorithmic management, including human rights-friendly impact assessments of workplace algorithms. This heightened scrutiny, driven by worker activism and regulatory oversight, can be instrumental in upholding effective assessment and regulation of algorithmic practices within corporate settings. However, this approach must be balanced against the costs associated with negotiation and conflict resolution, which can impact a company’s productivity and performance. Additionally, the efficacy of worker activism in enhancing compliance and enforcement largely depends on the presence of active and well-resourced worker organizations. Non-unionized workers would be less able to engage in such activism.

Moreover, imposing stringent collaborative duties and related liabilities resulting from the failure to negotiate workplace algorithms may have chilling effects on directors and officers. They may be deterred from expanding the use of algorithms or undertaking business risks to avoid liability, which could stifle innovation and growth. Furthermore, AI disclosure obligations may expose companies’ vulnerabilities, potentially facilitate cybersecurity attacks, and diminish their competitive edge. All of these effects can be counterproductive, hampering companies’ overall performance and discouraging growth and investment.[108] Collaborative duties can thus be viewed as encroachments upon the autonomy and competitiveness of companies and their directors. Despite these potential pitfalls, companies may still feel compelled to deploy AMS, due to the perceived competitive advantages and the increasing ubiquity of workplace algorithms in today’s business landscape.

Directors’ potential lack of AI expertise could also hinder their active involvement in the collective governance of AMS and their impact assessment. Engaging in disclosure, assessment, and negotiations around complex and rapidly evolving algorithms may require technical expertise and resources. Limited knowledge of the inner workings of AI and the “black box” problem further hinder the assessment process and the determination of the role of directors, including their duties and liabilities.[109]

Various protective measures exist in some jurisdictions to shield directors and officers from algorithm-related liabilities and associated costs. These measures include high executive compensation, due diligence defenses, the business judgment rule, indemnification commitments, liability insurance, and exculpation agreements. However, it is important to note that these protections may not be universally available to all directors and officers or feasible for all companies. Therefore, while these protective mechanisms offer some safeguards, they should not be viewed as a comprehensive shield against all potential liabilities arising from AMS. Directors and officers must remain vigilant and proactive in their approach to AI governance, regardless of the available protections.

These potential pitfalls, which can be costly and detrimental to companies in the short term, necessitate a cautious and balanced approach when introducing directors’ collaborative duties. A fair and ethical algorithm-assisted workplace environment is likely to lead to higher employee satisfaction, better retention rates, and increased productivity. Moreover, involving workers in algorithm assessments may also help improve directors’ AI oversight and mitigate their potential liabilities, AI ignorance, “black box” problems, and compliance costs, as employer-employee knowledge sharing, mutual oversight, and collective agreements take place ex-ante. These benefits of worker involvement enhance company performance; prevent future conflicts, strikes, and delayed production;[110] protect companies against costly tort liabilities; reduce litigation expenses;[111] preserve business reputation; and may boost shareholder value. Successful experiences of employer-employee collaboration and negotiation illustrate the benefits and feasibility of directors’ collaborative duties and AMS collective governance.[112] A comply-or-explain approach can also be beneficial in these circumstances. Further research will be imperative to delve into the nature of these challenges and explore potential regulatory solutions.

Conclusion

This paper has discussed the merits of the collective governance of ex-ante impact assessments of AMS. The participation of multiple stakeholders, notably workers, in the assessment of AMS may be justified by several factors, including algorithmic management’s impact on multiple stakeholders, the shared ownership of workplace data, the complexities of the new technology, and the need to enhance AMS assessments’ quality and legitimacy. Challenges facing this multi-stakeholder governance of ex-ante AMS assessments may include the potential prioritization of AI risk over workers’ human rights, collective bargaining constraints, and potential opposition from directors and shareholders.

This paper has claimed that introducing directors’ collaborative duties can be critical to ensuring the effectiveness of the collective governance of ex-ante AMS assessments. Such duties would require directors to collaborate with all participating stakeholders, notably workers, by disclosing, coordinating, negotiating, and rectifying workplace algorithms. These collaborative duties may facilitate a multi-stakeholder coordination and democratic governance of workplace algorithm assessments aimed at balancing companies’ performance and the welfare of all stakeholders, especially safeguarding workers’ human rights. Expanded directorial duties can be costly in the short term; however, they may ultimately improve companies’ performance in the medium and long term for the reasons discussed herein.

Further research is needed in several respects. Firstly, it would be ideal to conduct significant empirical studies to evaluate the workings of collective or multi-stakeholder governance of AMS assessments, examining the place and role of workers and unions. Secondly, it is important to further analyze the feasibility and effectiveness of directors’ proposed collaborative duties, including the identification of legal and non-legal barriers to the implementation of such duties in specific jurisdictions and institutional contexts. Lastly, an empirical examination of both the impact of such duties on worker involvement and the ability of workers and unions to engage in governing AMS and their assessments is also crucial.