I wrote the email below when invited to review a paper that was partially processed by ChatGPT. As I imagine the email could serve as useful inspiration for others, I decided to make it available.
Feel free to adapt and reuse it. No attribution is needed, but if you find this post helpful and reuse or adapt the text for your own purposes, I would appreciate it if you could include some of the links to our work where appropriate and useful. Thank you! If you make an adaptation that you also want to share, I’d love to know about it, too.
**************************************
Dear Dr. [redacted],
[Intro text redacted]
I was just reading the manuscript to prepare my review and then noticed this statement:
“Declaration of AI Use [redacted]”
I regret to inform you that I cannot review work that has been partially processed by a product that, in my opinion, fails to meet the standards of scientific integrity with which I need to comply.
“ChatGPT-[version redacted]” is a non-transparent proprietary product created with corporate biases, built through exploitative labour practices, in violation of authorship rights, and with known harmful psychological, societal, and environmental impacts. Moreover, notwithstanding the hype that OpenAI has been fuelling, a product like ChatGPT cannot [redacted], because [redacted] is a cognitive capacity, and the product has none. All in all, the use of ChatGPT for scientific purposes, in my honest opinion, clashes with core scientific integrity principles such as honesty, scrupulousness, transparency, responsibility, and independence.
Therefore, I need to decline this invitation to review.
To better understand my position, I recommend the following papers from me and my collaborators (see my signature for more resources):
- Guest, O. & van Rooij, I. (2025). Critical Artificial Intelligence Literacy for Psychologists. PsyArXiv. https://doi.org/10.31234/osf.io/dkrgj
- Guest, O., Suarez, M., Müller, B., van Meerkerk, E., Oude Groote Beverborg, A., de Haan, R., Reyes Elizondo, A., Blokpoel, M., Scharfenberg, N., Kleinherenbrink, A., Camerino, I., Woensdregt, M., Monett, D., Brown, J., Avraamidou, L., Alenda-Demoutiez, J., Hermans, F., & van Rooij, I. (2026). Against the Uncritical Adoption of ‘AI’ Technologies in Academia. Forthcoming in Journal of Digital Culture & Education. https://doi.org/10.5281/zenodo.17065099
I am sorry I cannot assist you further with this submission. I hope to be able to be of service on a different occasion.
kind regards,
Iris van Rooij
**************************************
Author Note
My refusal is a personal professional choice and is not meant to be prescriptive. That said, I would encourage all scientists to make their own choice to review, or not, consciously, thoughtfully, and after reviewing applicable codes of research integrity. I also hope that seeing others (like me) refuse can help empower those who would like to refuse too. In case of doubt, reach out for advice or support from trusted experts and/or allies.
More Resources
Radboud Critical AI Literacy website
Olivia Guest’s Critical AI website
Summer School: Critical AI Literacies for Resisting and Reclaiming