
THE FUTURE OF WORK – ALGORITHMIC MANAGEMENT AND WORKERS’ RIGHTS


This is the final text produced within the DiDaNet Project, coordinated by the Austrian Trade Union Confederation (OGB) and supported by the Austrian Ministry for Social Affairs, Health, Care and Consumer Protection.
Global digital platforms have led the way in introducing algorithms and artificial intelligence into the world of work. Algorithms, not humans, now operate in many spheres of work: from deciding which new staff to hire to assigning jobs and dismissing workers. Bloomberg’s article describing how platform workers lost their jobs because of algorithmic decisions was widely read worldwide, including in Serbia, and focused expert attention on the issue of algorithmic management. What may be less visible at first glance, but is extremely important, is that together with the rise of platforms we have gained new labour regulators who, unnoticed by lawmakers at the national level, have changed the rights of the workers who work on them.
When algorithms in the world of work make a wrong decision, the workers suffer the consequences, not those who introduced the algorithm. At the same time, algorithms operate in a “black box,” making decisions based on large databases into which many prejudices are often woven. Machines receive data whose validation and selection reflect the views of those who built the programs; machines today are also a reflection of those who programmed them (racism has been detected in the United States in the ranking of job applicants). An algorithm is a set of commands that operate on the “if-then” principle: in such a case, if a candidate for a job is African-American, they are automatically “lowered” to a lower priority level for the job in question. It is therefore easy to see that what is popularly called artificial intelligence is often just a set of commands executed according to the client’s idea (the client being the owner of the AI software). Algorithmic transparency has consequently become an essential issue for many stakeholders, including trade unions and policymakers. Both Europe and the United States are intensively considering the ethical dimension of artificial intelligence, and one of the areas where its use is assessed as potentially risky is the world of work.
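To make the “if-then” point concrete, here is a minimal, entirely hypothetical sketch of a rule-based screening score. All rule names, features, and thresholds are invented for illustration; the point is simply that such a system is a list of programmed priorities, and a biased rule (here one keyed to a feature like a postcode, which can act as a proxy for a protected attribute) can sit quietly among the rest:

```python
# Hypothetical rule-based ("if-then") candidate screening score.
# Every rule below is invented for illustration only.

def screening_score(candidate: dict) -> int:
    """Return a priority score for a job candidate (higher = better)."""
    score = 0
    if candidate.get("years_experience", 0) >= 3:  # if-then rule
        score += 2
    if candidate.get("has_degree"):
        score += 1
    # A biased rule hidden among the others: a feature correlated with a
    # protected attribute (here a postcode) silently lowers priority.
    if candidate.get("postcode") in {"10001", "10002"}:
        score -= 2
    return score

applicants = [
    {"name": "A", "years_experience": 5, "has_degree": True, "postcode": "10001"},
    {"name": "B", "years_experience": 5, "has_degree": True, "postcode": "20000"},
]
# Two otherwise identical candidates are ranked differently by the hidden rule.
ranked = sorted(applicants, key=screening_score, reverse=True)
```

Nothing in the output reveals why candidate A was ranked lower, which is exactly the “black box” problem described above.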
This was discussed on November 25th at the fourth national conference, “The Future of Work 2021 – Good Innovations,” organized by the Centre for Public Policy Research (CENTER). Mareike Melman, assistant professor at the Department of Information and Process Management at Bentley University in the US, has devoted part of her research to studying the impact of algorithmic management on global work platforms – from those dealing with food delivery and passenger transport to those where work is done online, such as Upwork and Guru.
Workers on food-delivery and transportation platforms receive detailed real-time instructions on their smartphones about where to go and what or when to pick up, which pressures couriers to speed up deliveries and harms their traffic safety and occupational safety and health. Algorithms are also in charge of calculating workers’ performance, awarding bonuses, and identifying low performers, who are then “punished.” This includes supervision and monitoring of work and mechanisms for evaluating workers – tasks that have traditionally been the job of managers.
Food-delivery platforms evaluate workers based on customer ratings, acceptance or rejection of delivery requests, or order-cancellation rates. This sometimes results in disciplinary measures and often in temporary or permanent deactivation from the platform, which means the courier has lost their job.
Very often, once deactivated, a worker has no way of reaching “human” management to explain whether they really made a mistake, or whether there was a mistake at all.
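The deactivation mechanism described above can be sketched as a simple threshold rule. This is an assumed illustration, not any real platform’s policy: the metric names and cutoffs are invented, since platforms do not publish their rules, but the structure shows how a few hard thresholds can end a job with no human review:

```python
# Hypothetical threshold-based performance review of the kind described
# in the text. All metric names and cutoffs are invented for the sketch.

def review_courier(avg_rating: float, acceptance_rate: float,
                   cancellation_rate: float) -> str:
    """Map performance metrics to an automated decision."""
    if avg_rating < 4.2 or cancellation_rate > 0.15:
        return "deactivate"  # account closed, with no human in the loop
    if acceptance_rate < 0.80:
        return "warn"        # flagged for "punishment" such as fewer orders
    return "ok"
```

For example, `review_courier(4.1, 0.95, 0.01)` deactivates a courier over a 0.1-point rating difference, and the worker never learns which threshold was crossed.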
In recent years, the EU has taken a series of steps towards greater transparency in the workplace, and the US has now raised the issue to one of its top priorities. As for platforms, regulators insist on changes covering both the “input” and the “output” of algorithmic decision-making: primarily auditing algorithms before they are applied in the workplace, free access to the “code,” and other approaches to strengthening the transparency of how algorithms function. Finally, they insist that the last instance deciding on disciplinary action or removing workers from a platform should be a human.
Mareike Melman believes it is not enough to establish good legislation in this field; above all, the ethical dimension of the irresponsible use of algorithms must be addressed. For example, Uber came under heavy public scrutiny in Britain when it emerged that it had dismissed several of its workers based on misleading data. “I believe that in the future, a company that cares about its reputation should certainly take care to use algorithms responsibly,” Melman said.
The GDPR, the (European) General Data Protection Regulation, has established essential data-protection rights, including access, deletion, rectification, and portability, and has also helped ensure algorithmic transparency in the workplace. First, the GDPR requires that personal data be “processed in a lawful, fair and transparent manner.” It entitles the data subject to clear and meaningful information on how the algorithm makes decisions, the possibilities for error, and the consequences of such an error.
The GDPR also insists that a human has the last word in decision-making and that the worker must have the right to present their arguments in the process. At the same time, the GDPR has proven very powerful in enabling workers to access the data a platform has collected about them, as evidence that the platform has arbitrarily suspended them or denied them payment.
Melman, however, believes that both the GDPR and the other control systems being developed are rigid, slow, and insufficient to allow workers to protect their rights in a reasonable time. This is primarily because they are effective only once the damage has already been done. Proving violations takes long years of litigation over complicated technological nuances, a process in which platforms are far more successful and resourceful than unions, workers, and courts.
What makes algorithmic management important for workers is another question: proving whether the platforms are, in fact, employers. The fact that they shape and manage the work process demonstrates that the people on whom platforms impose the role of independent subcontractors are, in fact, workers.
To address the problems of artificial intelligence inside and outside the workplace, the EU has proposed an Artificial Intelligence Regulation, which goes much further than previous regulation and requires an explanation before a decision is made whenever the algorithm is categorized as “high risk,” which is precisely the case in the labour area.
However, both Melman and many other researchers worry that companies have been given the right to conduct internal audits of these processes themselves.
Within the European Reshaping Work initiative, based in the Netherlands, of which the CENTER is a member, several working principles have been proposed to improve this process:

  • Workers have to be informed when an algorithm is applied, with information on what would produce a different outcome;
  • Workers have to be informed of their right to have their data deleted, their right to access data, the possibility of their portability (the possibility of retrieving that data from where it has been stored and transferring it to another location), and the correction of data;
  • Workers have to be notified of measures implemented to promote equality and human rights and avoid bias.

Essentially, such initiatives aim to provide workers with complete information about the processes to which they will be exposed, so that they can give their informed consent to such conditions, or challenge or reject them. This expands the field of control established through the GDPR.
However, measures that are important for controlling the application of artificial intelligence in the field of work are not only related to technologies and technological processes.
Social dialogue has an essential role in regulating these issues, primarily through the participation of trade unions in the design and co-management of algorithms, so as to simultaneously meet the needs of the company and protect the rights of workers. Through such dialogue, an “upgraded,” modernized interaction between unions and employers could develop voluntary codes of conduct for workplace algorithms, or new and innovative solutions.
