What unions need to know about ChatGPT and generative AI right now


The advent of generative AI has the potential to transform various industries, and the trade union movement is no exception. As automation and AI technologies continue to advance, they are causing disruptions across the global labor market, challenging workers’ rights, job security, and traditional labor relations. However, the emergence of generative AI may provide a new set of tools and opportunities for trade unions to effectively navigate this new terrain, both in terms of representing their members’ interests and adapting as institutions to these changes.

This is the opening paragraph from ChatGPT — the latest and most famous generative AI tool. My prompt was:

Write the opening paragraph for a blog post about how transformative generative AI will be for the trade union movement, including impacts of AI and automation on workers’ rights, members’ jobs and for unions as institutions.

For first-time users of ChatGPT and generative AI, entering a prompt into ChatGPT is generally all it takes to see the potential of this new tool.

In the last few months, millions of users have logged into the platform, and scores of tech companies have rushed to integrate their services with ChatGPT.

The hype machine for AI chatbots is in overdrive.

The “game-changer” with this new model of AI is that it can create content (written, code and visual) that was previously thought to be producible only by humans.

This new type of generative AI system is trained on massive databases of content covering trillions of data-points. (Newer variant models seem to require fewer data-points.)

Similarly, these types of AI are unpredictable: their novel behaviours and “hallucinations” cannot currently be predicted, because even the creators of the AI algorithms don’t reliably understand the inner workings of their programs.

Finally, it is becoming clear that even the creators of the AI tools don’t have a reliable way to steer their future development or behaviours.

What does this mean for unions? New technologies are developed all the time, so what makes generative AI and ChatGPT different? How should union leaders proceed when it comes to AI, and what practical challenges will it present for workers, members and unions as institutions in the near term?

In this article, I’ll go over some of the key things I think that unions need to know now about generative AI and ChatGPT.

It’s worth also re-emphasising: the views in this article are solely my own, not my employer’s.

Generative AI as a stored-labour system

In simplistic terms, ChatGPT and its variants are a form of stored labour: the product and store of significant mental labour by their writers and developers. The algorithms and code that make up generative AI are effectively the stored labour of years of research and development by programmers and developers. In addition, the content produced by generative AI is derived from the work of human writers, whose own mental and intellectual labour is literally stored in the system’s database, or drawn from “live” web content written by humans.

This is important because it helps us understand that AI doesn’t create new value, but rather it releases the stored labour in its code and database. Therefore, labour is still the source of the value generated by the output of generative AI.

Generative AI is different to previous types of AI writing because it allows for natural language queries to be rapidly responded to by the “chat bot”, which draws on text and data in its underlying database.

Software developers and researchers can easily adapt the models for a wide range of use cases, with little fine-tuning required for each task. For example, software developers can “prompt” the AI to write programming code, analyse research data, and even scan medical images. This is also rapidly being deployed in the finance and legal sectors, as well as marketing and online sales.

In this way, a range of new capabilities are made available to people who lack specialised analysis or machine learning skills.
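As a rough sketch of what this “prompting” looks like in code, the example below uses the pre-1.0 `openai` Python client. The model name, prompt wording and helper functions are illustrative assumptions, not a recommendation, and an API key would be needed to actually send the request.

```python
# Sketch of "prompting" a generative AI model to write code.
# Assumes the pre-1.0 openai Python client (`pip install openai==0.28`)
# and an OPENAI_API_KEY in the environment; the model name, prompt and
# helper names here are illustrative only.

def build_code_prompt(task):
    """Assemble a chat-style prompt asking the model for code."""
    return [
        {"role": "system", "content": "You are a careful Python programmer."},
        {"role": "user", "content": f"Write a Python function that {task}."},
    ]

def request_code(task):
    """Send the prompt to the API and return the generated code."""
    import openai  # requires openai<1.0
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=build_code_prompt(task),
    )
    return response["choices"][0]["message"]["content"]

# Building the prompt is separate from sending it, so the request can be
# inspected (or logged) before anything leaves your machine.
messages = build_code_prompt("deduplicates a membership list by email address")
```

Separating prompt construction from the API call also makes it easier to review what data is about to be sent to a third-party service, a point I return to in the section on data and privacy.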

For knowledge workers, ChatGPT and generative AI are the equivalent of the creation of the spreadsheet and Excel in the 1970s and 1980s.

Spreadsheets allowed advanced and complex calculations to be done by computer, far more quickly and easily than before. (Calculators and spreadsheets are another type of stored labour — the labour in this instance being mental labour.) ChatGPT and generative AI are the equivalent:

Trained on vast databases of human texts, from books to articles to internet content, programs such as ChatGPT, Bard, Bing, and Claude function like sophisticated autocomplete tools, identifying and predicting phrase patterns, which makes their output feel somewhat predictable, too. 

In the same way that word processing and spreadsheets had significant implications for the typists and accountants who used mechanical typewriters and calculators, AI will have massive impacts on knowledge and creative workers. Generative AI is likely to be as disruptive as the labour-storage, automation and mechanisation tools deployed for manual labour and industrial jobs: hugely productivity-boosting, but also job-displacing, and allowing capital to capture even more value and profit.

For example, software developers are one of the professions that most heavily use ChatGPT. Generative AI is significantly increasing their productivity (the value of which, of course, is almost all captured by their employers). This can be seen in the chart below, showing the decreased time they spend on the code Q&A site Stack Overflow.

An ACTU paper from 2018 noted, “One reason that, without intervention, technological change is likely to exacerbate income inequality is the opportunities it provides for circumnavigating labour standards.”

Where workers’ power is low, the benefits of labour-storage and productivity improvements are almost all captured by company owners. (This would certainly be the case, for example, if Hollywood production companies are allowed unrestricted use of AI to write or produce movies and shows.) Consequently, there is a huge risk that generative AI will massively worsen wealth inequality.

Don’t get caught up in the hype

The current crop of AI tools (ChatGPT especially) are subject to the Silicon Valley hype machine (AKA, “snake oil“). The advent of these tools is not an apocalypse, and it doesn’t herald the destruction of millions of jobs, despite what breathless Forbes articles may report. However, it does potentially herald the continued and rapid transfer of wealth to the Silicon Valley billionaire class, and the continued devaluing of workers’ skills, especially (and increasingly) those of creative and knowledge workers.

Generative AI and “large language models” are one particular type of AI — albeit an impressive one.

An easy way to think of generative AI and ChatGPT is as a “calculator for words“. It takes its database of words (or other data) and uses algorithms to determine the relationships between those words — how frequently they’re associated with each other and with concepts — and then, in response to a prompt, predicts the most likely sequence of words and sentences.
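The “calculator for words” idea can be made concrete with a toy next-word predictor. The sketch below (my own illustration, not how any real LLM is built) counts which word follows which in a tiny corpus, then generates text by repeatedly choosing the most frequent successor; real models do something far more sophisticated with billions of learned parameters, but the prediction principle is similar.

```python
from collections import Counter, defaultdict

# A toy "calculator for words": count which word follows which in a
# tiny corpus, then generate text by repeatedly picking the most
# frequent successor word.

corpus = "unions represent workers . unions bargain for workers . workers join unions".split()

def build_bigrams(tokens):
    """Map each word to a Counter of the words that follow it."""
    follows = defaultdict(Counter)
    for a, b in zip(tokens, tokens[1:]):
        follows[a][b] += 1
    return follows

def generate(follows, start, length=5):
    """Greedily extend `start` with each word's most common successor."""
    words = [start]
    for _ in range(length):
        nxt = follows.get(words[-1])
        if not nxt:
            break
        words.append(nxt.most_common(1)[0][0])
    return " ".join(words)

model = build_bigrams(corpus)
print(generate(model, "unions"))
```

Even this crude version shows why output can feel plausible yet derivative: everything it “writes” is recombined from the text it was fed.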

Many wild predictions have been made about the disruptive nature of generative AI, but they are often overblown (excepting the risks about mis/disinformation, and increased concentration of corporate power/profiteering).

By way of analogy, think about the most profound information technology developments of the last few decades, like the internet and smartphones. They have reshaped entire industries, but we’ve learned to live with them. Some jobs have gotten more efficient. Some jobs have been automated, so people have retrained themselves, or shifted careers. There are some harmful effects of these technologies, but we’re learning to regulate them. 

Even with something as profound as the internet or search engines or smartphones, it’s turned out to be an adaptation, where we maximize the benefits and try to minimize the risks, rather than some kind of revolution.

Decoding the Hype about AI

The caveat to this rather optimistic view is that the “harmful effects of these technologies” are almost all experienced by workers: permanent unemployment, or permanently insecure and precarious work. Even workers whose jobs have “gotten more efficient” feel the effects of a hollowing-out of the labour market.

The point I am trying to make is that society and economies adapt to technological change, but left “to the market”, it is workers who take on the economic risk of technological transition. There is not likely to be a major societal or economic revolution in the short term arising from AI. As I note later, taking a “just transition” approach to the impacts of generative AI would seem prudent.

Pushing further into knowledge and emotional work

As a stored labour tool, generative AI is different because it is starting to move into areas of work previously thought to be unique to humans.

The pace of change will also likely be slower than the AI hype machine proponents suggest. There are of course companies that are early adopters of new technology, but the vast bulk of employers and companies are slow to adopt new technology.

Most academic studies on the impact of artificial intelligence and machine learning before 2023 assumed that high-skill knowledge work would benefit more from AI and technological change.

This impacts knowledge workers, especially workers in occupations/industries such as HR, engineering, marketing, finance, programming and software development, management, journalism and creative writers.

Presently, generative AI is unreliable and unpredictable. (For example, it creates fake legal cases and citations, which is… problematic.) This limits its widespread use — but as I noted, the pace of advance for ChatGPT and its variants is very rapid. It will be a matter of months or a few years before generative AI is far more accurate when it comes to producing reliable outputs on legal or financial matters.

Regardless, a likely consequence of this will be to increase the productivity of existing knowledge workers. For example, ChatGPT probably won’t replace a large number of lawyers, but it will significantly increase the productivity of a lawyer or clerk by performing much of the administrative work or drafting work. Similarly, software developers won’t necessarily be replaced, but generative AI will allow a single developer to code much faster. This will accelerate as Microsoft and Google integrate generative AI into Word/Excel and other Office tools. (Of course, absent regulatory changes and union action, the corporations will also capture the vast majority of the financial benefits of this increase in productivity.)

Because generative AI is also increasingly good at producing human-like responses to prompts, a number of start-up online “emotional support” services are using generative AI for counselling/therapy. This is currently potentially dangerous, because the AI doesn’t know what it is doing or whether it is giving helpful or harmful advice. It will be a while before generative AI can be used in such settings without human supervision.

However, the role of AI in creative writing and performances, which is the basis of the WGA and SAG writers’ and actors’ strikes in the US, is an area where AI is being used by companies to significantly replace human labour. This is a textbook example of companies trying to use automation/mechanisation to replace or reduce human labour, and to capture the value and profits from doing so.

“I hope I’m wrong, but I do think that the use of AI is going to take over the entertainment industry,” said Justine Bateman, a member of the writers, directors and actors guilds. “And I think it’s going to be really bad.”

It is even more problematic because, as noted, generative AI is a labour-storage system — so the content produced by the AI platform is drawn from the unpaid labour of the writers and artists whose work has been “ingested” into the platform’s database. This means both the impacted employee and the creator of the “ingested” content are disadvantaged.

Disputes like the US writers and actors strikes are hugely consequential. It is very important for workers globally that WGA and SAG win their strikes.

Just transition for knowledge and creative workers

In 2017, Philip Jennings, then General Secretary of UNI Global Union, said:

Data collection and artificial intelligence are the next frontier for the labour movement. Just as unions established wage, hour, and safety standards during the Industrial Revolution, it is urgent that we set new benchmarks for the Digital Revolution.

While AI can seem new and different, unions have tackled the impacts on workers of disruptive technology in the past — Uber, decarbonisation, factory automation etc.

For example, back in 2018, the ACTU presciently wrote a paper on the impact of AI and automation. It made the important point that “transitions are, more often than not, economic disasters for working people and their families.”

This is a likely future for many workers and industries being impacted by generative AI.

While there will undoubtedly be new jobs created as part of these changes and it is not yet known to what degree the utilisation of artificial intelligence, automation, big data and other new technologies will lead to an acute transitional period of increased disruptive change, it seems inevitable that, without appropriate planning, the ‘Digital Revolution’ has the potential to exacerbate the current dual crisis of inequality and insecure work in Australia in the short, medium and possibly long-term.

ACTU, Responsible Innovation: Whither Workers?

In the paper, the ACTU supports the creation of an organisation “to oversee the impact that AI and automation will have on our society. …That organisation needs to be representative of workers and include the ACTU and other relevant trade unions. It also needs to have the capability to undertake transition planning for affected workers and to quantify not merely the economic opportunity inherent in these changes but also their costs.”

The just transition principles that we as a movement have developed for climate-impacted workers should likewise be extended to workers facing disruption from AI.

Unions could also look at what could be done through collective bargaining — this is the basis of the WGA/SAG strikes in the US. For example, should there be requirements about the use of generative AI in enterprises, and about consultation with workers? And, of course, agreements should ensure that workers share in productivity gains from the use of AI, e.g. through higher wages.

Another area that will impact workers was highlighted by the CEO of Freelancer.com, who made the point that “AI is turbocharging the freelancers, because you can be relatively unskilled but use these tools to design at the elite level.”

This highlights that generative AI will further erode the advantage that skilled workers hold. Not mentioned by the Freelancer.com CEO is that corporations and rent-seeking middlemen (like Freelancer.com and other “marketplaces”) are likely to capture the benefit, rather than the “relatively unskilled” workers.

Consequently, there is a big role for governments to step in and ensure there is an orderly transition. Regulations (not written by self-serving AI companies) are needed to ensure that profits are shared with the creators whose works were ingested into AI databases. Ethical-use rules, anti-bias and anti-discrimination rules are needed. And restrictions and mandatory guidelines on the use of AI in people management and HR are needed. (Not to mention restrictions on the use of AI in areas like the justice system, finance and debt-collection/credit ratings, etc. There’s a lot, as I note in the next section on social license.)

Social license for AI

Like every technology before it, AI and the companies that use it need social license.

This social license hinges on how the public perceive the benefits and costs of AI, and importantly who accesses the benefits and pays the costs.

Research by KPMG and the University of Queensland from 2023 shows that in most countries and in Australia, trust in AI is less than 50% (with trust highest in the BRICS countries).

Trust in Artificial Intelligence: Global Insights, KPMG and University of Queensland, 2023

Other surveys show that for the public to trust AI systems, there needs to be strict oversight, regulation and codes of conduct. Presently, there is almost no regulation of AI, although this could rapidly change.

When it comes to regulation, the ChatGPT creator, OpenAI, is one of the most vocal in pushing for greater regulations restricting the development of new or more advanced AI systems. While this may seem altruistic, it is in fact a standard Silicon Valley corporate ploy — grab market leadership, then establish a regulatory “moat” to stop competition.

What is clear is that the companies who own and develop AI systems should not be in charge of creating their own self-serving regulations and codes of conduct. Absent strong civil society and union action however, this is likely to be the outcome.

Our foremost concern should be preventing massive, unnecessary suffering of the sort that we know is possible given existing and soon-to-exist technologies. Rigorous studies of real and present AI-induced harms have been published, and potential solutions have been proposed.

For example, facial recognition technology can be used for tracking individuals and limiting basic freedoms, and generative image technology can be used to create false images or videos of events that never happened. To address these issues, calls to action have been made, including the Montreal Declaration on Responsible AI and the World Economic Forum’s Presidio Recommendations on Responsible Generative AI.

Richards, Arcas et al

Laws and regulations on the ethical, public-interest development and use of AI are urgently needed. Governments need to be wary of allowing the AI companies to write those laws.

Data and privacy

As I’ve noted earlier, these tools rely on huge databases to work. Dr Christina Colclough, a global expert on unions and worker privacy, provides a sound word of caution for unions considering using ChatGPT and generative AI:

The moment a union uploads what could be sensitive information to these systems, is the moment these commercial companies own that data and can reuse it. So my recommendation is: never upload membership data, conversations, union meetings, databases or spreadsheet files that in any shape or form can be de-anonymised or otherwise linked to a person or persons.

Many companies are wary of their staff using ChatGPT for this reason, and unions should likewise be very cautious.
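Dr Colclough’s warning can be partly operationalised as a pre-upload check. The sketch below is illustrative only — the regex patterns and placeholder tokens are my own assumptions, and pattern-matching can never catch names, job titles or contextual identifiers — so it supplements, rather than replaces, a “never upload member data” rule.

```python
import re

# Minimal sketch: strip obvious personal identifiers from text before it
# is pasted into a third-party AI tool. This is a last line of defence,
# not a substitute for policy -- it cannot catch names ("Jane" below
# survives), job titles, or other contextual identifiers.

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),   # email addresses
    (re.compile(r"\+?\d[\d\s-]{7,}\d"), "[PHONE]"),        # phone-like numbers
]

def redact(text):
    """Replace emails and phone-like numbers with placeholder tokens."""
    for pattern, token in REDACTIONS:
        text = pattern.sub(token, text)
    return text

sample = "Contact delegate Jane at jane.doe@example.org or 0412 345 678."
clean = redact(sample)
print(clean)
```

A union adopting something like this would still need a clear internal policy on what categories of material may never be uploaded at all, redacted or not.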

It is possible to use a self-hosted generative AI tool, or to access OpenAI’s models through Microsoft’s Azure OpenAI Service, which doesn’t hand over your union’s private information. This is quite technical, so it is likely only an option for larger unions with experienced IT staff.

The whole area of worker data and worker privacy is one I’ve written on extensively, e.g. here about unions’ own data security, and here about worker data and data trusts.

Excitement should be balanced with caution

While generative AI is very trendy and exciting, there are areas that we should be cautious, wary and concerned about, not just as unionists but as people concerned about corporate oligopolies, civil society and democracy.

Here are just a few:

  • AI will further concentrate power with corporates/billionaires: Almost all of the value from generative AI is likely to be captured by the Silicon Valley billionaire class (absent legislative action and union/civil society campaigns). From this perspective, AI advances will further entrench structural wealth inequality.
  • Intellectual property and unpaid labour: Generative AI is the stored labour of the workers/people whose writing/art/code has been ingested into the AI database, plus the stored labour of the original machine learning algorithm programmers. AI does not create anything new; rather, it iterates/calculates based on human-created content. Of course, almost none of the companies creating generative AI tools are compensating the workers whose art and writing has been ingested into the AI database. Cases are already going to court to test intellectual property rights, and in the future there will also likely be cases about financial compensation. It’s worth noting that Kenyan workers paid $2 per hour were responsible for “training” ChatGPT to be less abusive, and were exposed to traumatic, graphic and abusive content in the process. The future development of generative AI will continue to be highly exploitative.
  • Like humans, generative AI can be wrong. The term used by techies is “hallucinations”. This means that ChatGPT etc will confidently generate entirely false, fabricated responses to a user prompt. It can fabricate almost everything, including false citations and false “facts” — and no generative AI tool has any built-in mechanism to detect or signal this to the user.
  • Systemic biases. Almost all generative AI tools have been shown to have serious systemic biases. These range from AI-generated images defaulting to showing only white people, to restricting responses on political issues. It is deeply problematic to allow corporations to decide whether certain concepts are “off-limits” when using powerful word-generating tools. It is especially concerning that tech companies like Microsoft and Twitter have recently sacked their AI ethics teams.
  • AI will exacerbate gender inequality. Increasingly, research shows that women will be most disadvantaged by the deployment of AI, especially in the workplace. This is partly because many of the jobs that face replacement by generative AI are more often performed by women.
  • Worsening climate impact from AI energy use. The data centres powering artificial intelligence systems like ChatGPT require enormous amounts of water — about 68,000 litres a day — not to mention vast energy consumption. This is because of “the training process, in which large amounts of energy are used and converted into heat, requiring a staggering quantity of freshwater to keep temperatures under control and cool down machinery. Further, the model also consumes a significant amount of water in the inference process, which occurs when ChatGPT is used for tasks like answering questions or generating text. For a simple conversation of 20-50 questions, the water consumed is equivalent to a 500ml bottle, making the total water footprint for inference substantial considering its billions of users.” Needless to say, the climate impacts of unregulated AI systems are substantial and risky!
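Taking the figures quoted above at face value, the per-question water footprint of inference is easy to back out. This is a rough, illustrative calculation using only the estimates cited in this post, not independent measurements:

```python
# Back-of-envelope arithmetic from the quoted estimate: a 500 ml bottle
# of water per conversation of 20-50 questions. These are the article's
# cited figures, not measurements.

BOTTLE_LITRES = 0.5
QUESTIONS_LOW, QUESTIONS_HIGH = 20, 50  # questions per 500 ml bottle

# Per-question water use, in millilitres (range depends on conversation length)
ml_per_question_high = BOTTLE_LITRES * 1000 / QUESTIONS_LOW   # short conversations
ml_per_question_low = BOTTLE_LITRES * 1000 / QUESTIONS_HIGH   # long conversations

# Scaled up: litres of water for one million questions at the high end
litres_per_million = BOTTLE_LITRES * 1_000_000 / QUESTIONS_LOW

print(ml_per_question_low, ml_per_question_high, litres_per_million)
```

So roughly 10-25 ml per question: trivial for one user, but substantial once multiplied across hundreds of millions of users, which is the point the quoted study makes.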

Initial questions for union leaders

For unions considering the challenges posed by generative AI, leaders will want to quickly identify the parts of their membership where the technology could have the most immediate impact.

Some of the early, basic questions to think through could include:

  • Given that AI is expected to change quickly, what mechanisms can you implement to monitor its use and deployment by employers? For example, can you detect if an employer is planning to roll out ChatGPT to replace workers in a particular function (e.g. customer service)?
  • Where might the technology disrupt your membership? Are the major employers in your coverage susceptible to disruption by AI (and would this impact your members’ jobs)?
  • How are employers deploying AI into systems in their business? Are they using AI in human resources decisions, or WHS? Are AI systems used to monitor workers? (And what about worker data collected by employers?) Many of these tools will start to be added to the platforms used by employers to manage staff, like Xero, BambooHR or EmploymentHero.
  • What are your union’s policies and posture towards AI? For example, are you waiting to see how the technology evolves or actively looking to insert consultation clauses for deployment of AI into a collective agreement?
  • What is your union’s view on how government should respond? For example, ACTU Congress policy broadly states that with digitisation and automation, the benefits should be equally enjoyed by workers and companies — what is the role of governments and bargaining in implementing this?

It is essential for unions to be proactive when it comes to generative AI — while there is a lot of hype, this is an area that will advance rapidly and unexpectedly. This means union leaders will want to remain aware of the risks that exist at this early stage.


Read more: How unions can use generative AI right now.


The featured image for this post was also created using generative AI — Dall-E 2.

This article has been updated since it was first published in April 2023.
