Accountability for algorithms - Internet Newsletter for Lawyers

Sometimes it is hard to remember that law goes on being made during pandemics, just as it does during wars and economic crises. The laws made now, whilst we battle with the ethical and economic impact of possible further lockdowns and social distancing measures, are intended to regulate future activities, even if we can’t yet envisage specifically what those activities will be. 

As 2021 was drawing to a close, several White Papers and reports were published which reviewed governance of working practices by algorithm and the use of AI at work. Those reports culminated in calls for a new Accountability for Algorithms Act, which was proposed by the All Party Parliamentary Group on the Future of Work in its paper The New Frontier: Artificial Intelligence at Work in November 2021. The proposed Act is intended to establish a “simple, new corporate and public sector duty to undertake, disclose and act on pre-emptive Algorithmic Impact Assessments (AIA)”. The APPG also proposes that the new Act would “raise the floor of essential protection for workers” and protect them from the “adverse impacts of powerful but invisible algorithmic systems”. 

The APPG’s proposals appear squarely aimed at the practices identified in the Institute for the Future of Work’s The Amazonian Era report, published earlier in 2021. Extracts from that paper made headlines when the national press picked up on reports of van drivers speeding or jumping red lights to meet AI-generated delivery schedules; of cameras being installed in delivery vans to detect when a driver yawns; and of warehouse staff being required to wear devices that detect which muscles they use in a depot. Overall, The Amazonian Era report found that there has been a “significant increase in use of surveillance and other AI technologies that control fundamental aspects of work” and confirmed what many already suspected: that “The practice, tools, and ethos of the gig economy are being embedded across essential sectors without due regard for adverse impacts on work and people.”

Whilst the IFOW report may seem to record an isolated set of circumstances, even workers not engaged in “essential services” have found increased scrutiny of their working practices whilst based at home over the last 18 months. Employers are increasingly installing, and workers are increasingly aware of, “bossware”: software used to monitor time spent working, programs that track eye movement around a screen (capturing whether someone is engaged on the same task as colleagues), and keystroke monitoring to see if a worker is doing their online shopping on “work time”. In some cases bossware even identifies ambient noise from conversations or home appliances, such as a TV or washing machine, to ascertain what non-work activities are being carried out in the background during a work session. 

Although an employer may have a clear policy on data use and privacy in its staff handbook, even now few have adopted updated and revised WFH policies. Some employers have scrambled to get staff back to work with modifications to the work environment, while others have accepted that the new normal means a skeleton staff at the work premises, with others attending on a rotating, ad hoc basis. Absent a WFH policy, navigating the laws around monitoring workers’ activities and output is complex. The relevant framework spans the GDPR, the Data Protection Act 2018, the Human Rights Act 1998 (incorporating the European Convention on Human Rights), the Investigatory Powers Act 2016 and the Investigatory Powers (Interception by Businesses etc. for Monitoring and Record-keeping Purposes) Regulations 2018 (SI 2018/356), as well as the common law duty of trust and confidence implied into all contracts of employment, which exists to protect employees at the broadest level.

All of this points to the APPG being bang-on when it said in its report that “There are marked gaps in legal protection at individual, collective and corporate levels.” This recognition of the powerful influence technology now exerts on our daily lives may be the result of a perfect storm: AI advancement coinciding with mass digital and remote engagement. Or it could genuinely be the future of work, and if it is, how should AI’s powerful and pervasive prominence be addressed to make sure work done by people stays human? 

The proposed Accountability for Algorithms Act is intended to establish “a simple, new corporate and public sector duty to undertake, disclose and act on pre-emptive Algorithmic Impact Assessments (AIA).” By obliging all employers using AI to undertake an AIA, the regulation is intended to plug the gaps in areas where AI operates but which have not previously been considered for regulation or legislation. The new law would also look to create new collective rights for unions and other specialist third sector organisations, enabling them to exercise new duties on behalf of their members or interest groups. Under the proposals, the freshly-minted Digital Regulation Cooperation Forum (DRCF) is likely to be expanded and granted new powers to create certification schemes, as well as gaining more regulatory influence through a right to issue statutory guidance, rights to suspend use or impose terms, and a role supplementing the work of existing regulatory entities. 

The first of the “four planks” for determining regulation of AI (as they were titled in the APPG’s report) proposes that existing principles of AI governance be put on a statutory footing, along with ethico-legal considerations. By establishing a new corporate duty to undertake and disclose AIAs from the earliest stages, when AI is first envisaged as part of a system at work, rigorous assessment of the effect of that AI on work and workers must be undertaken. The AIA will be required to address both the good and the bad impacts on “good work”, identify the individuals and communities most likely to be affected by algorithmic decisions and, crucially, keep those decisions under review. 

The second plank is “updating digital protection”, including granting workers an easily accessed right to a full explanation of the purpose, outcomes and significant impacts of algorithmic systems at work, including sight of the AIA. This is a democratisation largely unseen before in terms of corporate obligations, and it risks less scrupulous employers making their AIAs somewhat vanilla in the knowledge that they may be required to publish them. If this sounds sensationalist, or just cynical commentary from a jaded lawyer, the APPG’s report makes salutary reading: it gives examples of workers feeling constantly on call, and of staff no longer talking to each other in order to keep pace with their algorithm-planned work schedules, and the lack of care and consideration for “good work” is apparent. The astute observation from the Institute of AI, that a person who does not understand how a decision about them has been reached cannot possibly know how to go about rectifying that decision, is telling, and it is not so very hard to imagine that certain employers will do what they can to ensure that once a decision is made by their favoured AI system, it can be interrogated only minimally.

The third recommendation calls for a “partnership approach”, working with business and unions to test new interventions and models of work. This puts unions at the centre of the discussion, calling on their experience in pushing for workers’ rights to form the basis of this new drive. 

The final recommendation is that the Government’s Digital Regulation Cooperation Forum should be extended and given new regulatory powers. The aim is to make the UK a world leader not just in AI and innovation but in the governance related to it. Whilst the State of California is currently consulting on A Bill of Rights for an AI Powered World, the APPG sees opportunities for Britain to set a level playing field with a set of rules everyone abides by to “ensure the responsible innovation everyone wants to see”, as Jeremias Adams-Prassl of the University of Oxford, Faculty of Law, described it in the report.

The AAA will overall support the creation and implementation of human-centred AI. It looks to apply the principles in the Good Work Charter proposed by the Institute for the Future of Work: access, fair pay, fair conditions, equality, dignity, autonomy, wellbeing, support, participation and learning.

None of these tenets seems unreasonable, does it? Surely in our 21st century economy and working environments, ensuring these principles are met should not be a “nice to have”. In the end, humans program AI, and AI learns from the data it is trained on. Giving AI the best training in “good work” and in what it means to be human-centred could, in the end, result in systems better than those implemented in the last century by small-minded penny-pinchers in the form of ‘time and motion’ studies, which no longer speak to the modern workforce or workplace. 

Let’s hope that the lofty aims of the AAA are not diluted during consultation or progression through the Houses and that we are able to make the shift from accountability of workers by algorithms to accountability for algorithms by employers. 

Joanne Frears is IP & Technology Leader at Lionshead Law, a virtual law firm specialising in employment, commercial, technology and immigration law. She advises innovation clients on complex contracts for commercial and IP matters and is a regular speaker on future law. Email j.frears@lionsheadlaw.co.uk. Twitter @techlioness.
