Stop feeding the hype and start resisting

Three weeks ago, I wrote a blogpost about how ChatGPT is a “stochastic parrot” (a term coined by Bender, Gebru, McMillan-Major, & Shmitchell, 2021; see also this video for an explanation) and how its use for academic (and other) writing constitutes automated plagiarism. My aim was to bring the discussion down to earth and to prevent hyped-up AI from hijacking our attention and dictating our education and examination policies.

With disbelief and discontent, I have since watched academics in The Netherlands jump on the bandwagon and surf the AI hype wave, e.g., by talking enthusiastically about ChatGPT on national television or in public debates at universities, and even by organising workshops on how to use this stochastic parrot in academic education.

Deeply troubled by seeing my Dutch colleagues — both at @Radboud_uni and elsewhere in the country — hyping up ChatGPT rather than helping curb the hype, which I think is our responsibility as academics. Why do we let money-motivated AI tech dictate our academic research and debate agendas? We need rather to resist and educate on critical reflection. — (@Iris on Mastodon)

It’s almost as if academics are eager to do the PR work for OpenAI (the company that created ChatGPT, as well as its predecessor GPT-3 and its anticipated successor GPT-4).

Why?

The willingness to provide free labour for a company like OpenAI is all the more noteworthy given (i) what is known about the dubious ideology of its founders, known as ‘Effective Altruism’ (EA) (Gebru, 2022; Torres, 2021), (ii) that the technology is made by scraping the internet for training data without concern for bias, consent, copyright infringement or harmful content, nor for the environmental and social impact of both the training method and the use of the product (Abid, Farooqi, & Zou, 2021; Bender et al., 2021; Birhane, Prabhu, & Kahembwe, 2021; Weidinger et al., 2021), and (iii) the failure of Large Language Models (LLMs), such as ChatGPT, to actually understand language, and their inability to produce reliable, truthful output (Bender & Koller, 2020; Bender & Shah, 2022).

(…) the tendency of human interlocutors to impute meaning where there is none can mislead both NLP researchers and the general public into taking synthetic text as meaningful. Combined with the ability of LMs to pick up on both subtle biases and overtly abusive language patterns in training data, this leads to risks of harms, including encountering derogatory language and experiencing discrimination at the hands of others who reproduce racist, sexist, ableist, extremist or other harmful ideologies reinforced through interactions with synthetic language. — Bender, Gebru, McMillan-Major, & Shmitchell (2021)

As Tamar Sharon, professor of Ethics and Political Philosophy and co-director of iHub at Radboud University, notes in the Dutch newspaper NRC¹: “the ideals of OpenAI are not credible”; the company was “founded by a group of billionaires based on their ideology of Effective Altruism, EA”, and while they talk about making “beneficial AI”, so far this type of tech is realised by exploiting the cheap labour of underpaid workers, and the push to make Large Language Models (LLMs), such as ChatGPT, ever larger creates a “gigantic ecological footprint” with implications for “our planet that are far from beneficial for humankind”. [Quotes translated from Dutch to English; the original is available in footnote 1.]

Why would we, as academics, be eager to use and advertise this kind of product?

Privileged people are left unscathed by the nuanced and system-level issues we touch on (…) these issues are difficult to acknowledge for those in power — they are seen as a sideshow, a political/politicised distraction rather than an essential element of good (computational) science. — Birhane & Guest (2021)

Maybe we, academics, have become so accustomed to offloading our thinking to machine learning algorithms that we cannot think critically anymore (see e.g. Spanton & Guest, 2022; Guest & Martin, 2022; van Rooij, 2020), making us susceptible to believing false, misleading and hyped claims? Or maybe we are afraid to exercise our independent decision-making capacity and say “No” to automated bias, hype, misinformation and otherwise harmful technology? Or maybe privileged academics are just fine with enabling the agendas of multimillion-dollar companies founded by people motivated by capitalist and bigoted ideologies? Or maybe a mix of these things?

I sure hope not.

In this age of AI, where tech and hype try to steer how we think about “AI” (and, by implication, about ourselves and ethics) for monetary gain and hegemonic power (e.g. Dingemanse, 2022; McQuillan, 2022), I believe it is our academic responsibility to resist.

Academics should be a voice of reason and uphold values such as scientific integrity, critical reflection, and public responsibility. Especially at this moment in history, it is vital that we provide our students with the critical thinking skills that will allow them to recognise misleading claims made by tech companies and to understand the limits and risks of hyped and harmful technology that is being made mainstream at a dazzling speed and on a frightening scale.

[A]s Safiya Noble warns us in Algorithms of Oppression, these platforms aren’t neutral reflections of either the world as it is (…), but rather shaped by various corporate interests. It is urgent that we as a public learn to conceptualize the workings of information access systems and, in this moment especially, that we recognize that an overlay of apparent fluency does not, despite appearances, entail accuracy, informational value, or trustworthiness. — Bender & Shah (2022)

Please join me in resisting, and help curb the hype.

As I also said on Twitter and Mastodon, the hype and confusion surrounding ChatGPT are but the tip of the iceberg of the problems caused by AI hype in general. I warmly recommend watching this keynote talk by Emily Bender at last year’s Cognitive Science Conference (CogSci2022) to learn more about this topic.

Acknowledgements

I am grateful to Olivia Guest for many discussions that have helped me develop a better understanding of some of the issues raised in this blogpost.

References

  • Abid, A., Farooqi, M., & Zou, J. (2021). Persistent anti-Muslim bias in large language models. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (pp. 298–306).
  • Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’21) (pp. 610–623). Association for Computing Machinery, New York, NY, USA.
  • Bender, E. M., & Koller, A. (2020). Climbing towards NLU: On meaning, form, and understanding in the age of data. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (pp. 5185–5198).
  • Bender, E. M. & Shah, C. (2022). All-knowing machines are a fantasy: Beware the human-sounding ChatGPT. The Institute for Art and Ideas.
  • Birhane, A., Prabhu, V. U., & Kahembwe, E. (2021). Multimodal datasets: misogyny, pornography, and malignant stereotypes. arXiv preprint arXiv:2110.01963.
  • Birhane, A., & Guest, O. (2021). Towards decolonising computational sciences. Kvinder, Køn & Forskning, 29(2), 60–73.
  • Dingemanse, M. (2022). Monetizing uninformation: a prediction. Blogpost on The Ideophone.
  • Gebru, T. (2022). Effective Altruism Is Pushing a Dangerous Brand of ‘AI Safety’. WIRED.
  • Guest, O., & Martin, A. E. (in press). On logical inference over brains, behaviour, and artificial neural networks. Computational Brain & Behavior. (preprint: https://doi.org/10.31234/osf.io/tbmcg)
  • McQuillan, D. (2022). Resisting AI: an anti-fascist approach to artificial intelligence. Policy Press.
  • Noble, S. U. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. New York University Press.
  • Spanton, R. W., & Guest, O. (2022). Measuring Trustworthiness or Automating Physiognomy? A Comment on Safra, Chevallier, Grèzes, and Baumard (2020). arXiv preprint arXiv:2202.08674.
  • Torres, E.P. (2021). The Dangerous Ideas of “Longtermism” and “Existential Risk”. Current Affairs.
  • van Rooij, I. (2020). Mixing psychology and AI takes careful thought. Blogpost in Donders Wonders.
  • Weidinger, L., Mellor, J., Rauh, M., Griffin, C., Uesato, J., Huang, P. S., … & Gabriel, I. (2021). Ethical and social risks of harm from language models. arXiv preprint arXiv:2112.04359.

Footnote

1 In Dutch: “De idealen van OpenAI zijn ongeloofwaardig”, vindt [Tamar] Sharon, … “OpenAI is opgericht door een groepje miljardairs vanuit hun ideologie van Effective Altruism, EA (…) Ze hebben het over ‘beneficial AI’ die in de toekomst menselijke arbeid kan overnemen, maar vooralsnog wordt veel AI aangedreven door menselijke arbeid in lagelonenlanden: tienduizenden onderbetaalde krachten die door de datasets heen spitten. (…) En Large Language Models als ChatGPT hebben een gigantische ecologische voetafdruk: ze slurpen energie. De huidige trend in AI-land is om deze LLM’s steeds groter te maken, want daar gaan ze beter van presteren. Wat dat betekent voor de planeet is allerminst beneficial voor de mensheid.”– Interview by Colin van Heezik in NRC Handelsblad (1-1-2023).