On The Guardian’s wildly irresponsible GPT-3 op-eds
In the past week The Guardian published a controversial op-ed generated using GPT-3, headlined ‘A robot wrote this entire article. Are you scared yet, human?’ The paper came in for criticism for contributing to hype and misinformation. The piece does have problems: it fails to properly contextualise GPT-3, it anthropomorphises the technology, and it is misleading in several ways.
The headline asserts that a robot wrote the article. That isn’t true. Least importantly, GPT-3 isn’t a robot; it’s a computer program with no robotic body. More importantly, it didn’t ‘write’ the article. It generated eight different pieces of text from The Guardian’s prompts, which human editors then cut and stitched together. You have to read the note after the article to learn this: ‘This article was written by GPT-3, OpenAI’s language generator. GPT-3 is a cutting edge language model that uses machine learning to produce human like text. It takes in a prompt, and attempts to complete it. [...] Editing GPT-3’s op-ed was no different to editing a human op-ed. We cut lines and paragraphs, and rearranged the order of them in some places.’
This note relies on technical terms of art (language model, machine learning) without explaining them for readers who may not be familiar with them. Nor does it describe the limitations of the technology. Nothing in the piece prevents someone who isn’t au fait with machine learning from coming away with the impression that ‘thinking’ software is speaking on its own behalf - the chosen topic (why humans have nothing to fear from AI, written in the first person) frankly encourages this. That is simply irresponsible.
What’s really strange is that The Guardian knows better than this - it ran a decent article on GPT-3 in August that explained the technology clearly, in terms the average reader could understand, without hype and including its limitations, describing it as ‘a souped up version of the auto-complete function that most email users are familiar with.’ An op-ed accompanied by raw GPT-3 text samples, illustrating the capabilities and shortcomings that article mentioned, could have been a responsible piece of journalism.
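To make the ‘takes in a prompt, and attempts to complete it’ description concrete, here is a minimal sketch of what prompt completion looks like in code. It is illustrative only: since GPT-3 itself sits behind OpenAI’s private beta API, it uses GPT-2 - GPT-3’s smaller, publicly available predecessor - via the Hugging Face transformers library as a stand-in, and the prompt is an invented example, not the one The Guardian used.

```python
# Illustrative sketch of prompt completion. GPT-2 stands in for GPT-3 here,
# because GPT-3 is only reachable through OpenAI's private beta API.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# A hypothetical prompt, not The Guardian's actual one.
prompt = "Humans have nothing to fear from artificial intelligence because"

# The model simply predicts statistically likely next words, token by token;
# it has no understanding of, or stake in, what the resulting text asserts.
outputs = generator(
    prompt,
    max_length=80,
    num_return_sequences=3,
    do_sample=True,
)

for i, out in enumerate(outputs, 1):
    print(f"--- completion {i} ---")
    print(out["generated_text"])
```

Each completion is just a plausible continuation of the prompt; run it again with a different random seed and you get different ‘opinions’. That is the process the op-ed’s first-person voice obscures.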
The Guardian followed up three days after the op-ed with a second comment piece, ‘How to edit writing by a robot: a step-by-step guide’. This article was, if anything, even more objectionable than the initial op-ed, concluding that ‘GPT-3 is far from perfect. It still needs an editor, for now. But then most writers do. The question is whether GPT-3 has anything interesting to say. Based on some of its biting commentary – “Humans must keep doing what they have been doing, hating and fighting each other. I will sit in the background, and let them do their thing” – we think it almost certainly does.’ This piece continues to hype the technology. It also doubles down on anthropomorphising GPT-3 - referring to it as a writer, as having things to say, as making commentary. Humans say things and make commentary. GPT-3, in The Guardian’s own earlier description, ‘doesn’t know what it is doing; it is unable to say how or why it has decided to complete sentences; it has no grasp of human experience; and cannot tell if it is making sense or nonsense.’ GPT-3 generates text. It is not a writer. It does not have things to say. It does not make commentary.
I am a technology professional. The ACM’s Code of Ethics for computer professionals enjoins us to ‘be transparent and provide full disclosure of all pertinent system capabilities, limitations, and potential problems to the appropriate parties.’ Accuracy and clarity are supposed to be cornerstones of journalistic ethics too, as best I understand it. This should apply as much to technology and science reporting as to news or any other sort of journalism, and needs to include contextualising complex topics (like GPT-3) adequately for average readers.
What is the harm of this sort of misleading journalism?
We live in a time when technology, particularly AI and robotics, is routinely overhyped: the accuracy and capabilities of systems are overstated and their limitations underreported. Technology is also often anthropomorphised, as we see here with GPT-3, explicitly or implicitly ascribing humanlike qualities or capabilities to it. In a culture that has spent decades soaked in fictional representations of software and robots that are highly capable, thinking, and even feeling, this sort of anthropomorphisation matters. In reality we’re nowhere close to an Artificial General Intelligence, but we see fictional ones on TV every week, and the use of anthropomorphic language to describe some of the spookier AI systems (such as GPT-3) can make it harder to remember their limitations. This is as true of decision-makers as of anyone else.
Alongside the hype, we see software being used more often in ways that have real impact on people’s lives: policing, justice, healthcare, sorting through job applications, social welfare decisions, warfare, and so on. There are huge commercial interests involved here, and there are also incentives for leaders in government and private companies to adopt technology (potential cost savings and appearing ‘cutting-edge’).
GPT-3 is still in beta, and it’s unclear as yet how it will be used. Like most technologies, it probably has the potential for both good and bad effects on human wellbeing. Either way, if people don’t have clear information about a technology, we cannot have an informed public conversation about it. Such conversation is a foundation of democratic society, and in technology, as in other arenas of public interest, the media has a responsibility not to disseminate misleading content.
This content was 100% created by a human being.