18 Dec 2024: The manliness of AI

After looking at an artistic approach to AI, then computer vision, in this session we focused on language – as used by Large Language Models (LLMs) like ChatGPT. Liz Jackson, a humanities scholar, laments how LLMs promote not only a masculine way of communicating, but (unsurprisingly) a very Silicon Valley-startup style: favouring language that is punchy and brief. [FYI this aligns with my recent study of temporality in Silicon Valley.]

Another participant thought the article might have been about the prevalence of (preference for?) female voices amongst AI agents. We discussed how female voices, too, seem to embody a masculine, to-the-point style of speaking. We also discussed cultural differences – for example, the author is a UK academic, and one of our participants mentioned working with German engineers, who perhaps embody the extreme end of to-the-point speaking (at the risk of perpetuating a stereotype). At the other end is that particularly British way of saying one thing and meaning another; or, as was raised in a recent event I ran, the many possible meanings of the word ‘sorry’. Jackson defends the use of ‘filler’ words that LLMs decry, because they add nuance.

One of our participants, a PhD student in psychology, agreed: ‘I would rather prefer to have some human tone in the conversations… we are lacking the human aspect in AI language.’

Jackson also discusses the ‘voice from nowhere’, known in ethnography and elsewhere as a voice of disembodied authority, a voice of God. Actually, it remains the dominant mode in most academic writing. One participant pointed out that AI voices (as all voices) come from somewhere – the assumption with LLMs is that it’s from male engineers. This participant is indeed an engineer, and noted: ‘As engineers, we are trained not to waffle. Don’t be flowery. Use as few words as possible to get the information out. But it has given me the perspective that maybe there are other reasons, or other ways, that you would want to convey information.’ This brought our discussion to the important issue of context.

Another participant explicitly asks ChatGPT to refer to itself as ‘we’, not ‘I’.

The conversation inevitably came around to teaching, since Jackson discusses this at some length. I noted that I encourage students to use AI – but to be transparent about it, and not to use it for final work, only early drafts. Many students for whom English is not their first language find AI translation hugely useful, in the classroom and elsewhere.

We could extend this to people in general, including native English speakers – culture and class come into it, and indeed we had a variety of English accents in the group.

Tuning AI

An interesting topic raised by one participant is how LLMs can be ‘tuned’:

‘Because they’re tuned on such a huge corpus, they can sort of be de-tuned, in the way you have Western music – it has the 12-tone system and we have a certain type of tuning, but there are other cultures that have different types of tunings. But that’s an area that really hasn’t even been available to the regular person to sort of tune.’

This, it was noted, is known as the ‘temperature’ of an AI model: ‘ChatGPT is really not the language model. It’s like a very finely filtered version of the language model, and it’s shaping my creative expression. It’s making me talk in ways that I don’t normally talk. Well – actually, it can be tuned to do that.’
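For the technically curious: strictly speaking, temperature governs how predictable the model’s word choices are – narrower than the cultural ‘tuning’ the participant had in mind, but it gives a flavour of the mechanics. The model assigns a score to every candidate next word, and dividing those scores by a temperature before converting them to probabilities sharpens or flattens the resulting distribution. Here is a minimal sketch of the idea, using made-up toy scores rather than a real model:

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Turn raw model scores (logits) into sampling probabilities.
    Lower temperature sharpens the distribution (safer, more conventional
    choices); higher temperature flattens it (more surprising choices)."""
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Toy next-word scores for the prompt 'The weather today is ...'
scores = {"sunny": 2.0, "mild": 1.0, "apocalyptic": -1.0}
for t in (0.5, 1.0, 2.0):
    probs = softmax_with_temperature(list(scores.values()), temperature=t)
    print(f"temperature {t}:", {w: round(p, 2) for w, p in zip(scores, probs)})
```

At temperature 0.5, ‘sunny’ dominates almost completely; at 2.0, even the oddball word in the tail gets a real chance – one small, mechanical sense in which these systems can be made more or less conventional.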

We can ‘tune’ many AI systems by modifying the prompts we give them. A common tactic is to instruct a chatbot to act as an expert in a certain domain, or as a particular character, or to act as if <add your condition here>. For example, one participant experimented by asking a chatbot to pose as a female, and got very different responses from the default ones. They then interrogated the chatbot about this, resulting in an interesting and substantive discussion.
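To make the tactic concrete, here is a minimal sketch assuming the OpenAI Python SDK – the model name, persona, and question are illustrative inventions, not what was used in the session:

```python
# pip install openai -- assumes the OPENAI_API_KEY environment variable is set
from openai import OpenAI

client = OpenAI()

# The 'system' message sets the persona before the user says anything.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative choice of model
    messages=[
        {"role": "system",
         "content": "You are a Victorian naturalist. Answer in that voice, "
                    "with the digressions and hedges the era favoured."},
        {"role": "user",
         "content": "What do you make of machines that write?"},
    ],
    temperature=0.7,  # the same 'temperature' knob discussed above
)
print(response.choices[0].message.content)
```

The same persona trick works in the chat interface itself, of course – the system message is just the programmatic version of typing ‘act as…’ at the start of a conversation.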

The participant added that with some chatbots now, you can converse in more than one language within the same conversation, and this can surface nuances that a single language would miss.

The other side of the prompt

How much do chatbot users blindly accept the suggestions they are given? I was reminded of research from the 1980s on the spread of US television shows to other countries. Viewers didn’t simply accept or adopt American values passively – people bring their own critical faculties to what they consume.

The ‘voice from nowhere’ doesn’t help, though. Nor, as one participant pointed out, does a growing anti-intellectualism, related to ‘quiet quitting’ and doing the minimum amount of work needed to meet targets or assessment criteria.

This comes back to teaching: it was noted that teachers increasingly adopt AI tools to help assess student coursework – including, as I encountered recently, MOOCs with hundreds of thousands of students that no single instructor could possibly assess themselves. This reminded me of the ‘Boring Apocalypse’ scenario: as more people use AI to generate content, more people use AI to summarise that content and generate responses to it, resulting in ever more content and more work for everyone.

It seems inevitable that schools and universities will need to change what and how they teach, once the assumption is that content might be AI-generated by default. Trying to fight that is an arms race. One participant pointed out that you can get AI to write your PhD thesis, but you would still fail the viva. We can hope that critical thinking remains a necessary skill. I pointed to the ‘flipped’ learning model, where students consume a lecture (maybe via YouTube) on their own, and class time is reserved for discussion.

One of our participants summarised nicely:

We have to have faith in the eternal inquisitiveness of humanity – even though there are all these new shortcuts and opportunities for plagiarism, there have always been shortcuts and opportunities for plagiarism. And this is just the latest in a long line of them. There are those who just want to pass the test, or meet their targets. But we’re still thinking beings. We just have a bigger encyclopaedia to use now.

‘I’m sorry, my mistake’

There are also good and creative uses of AI – I pointed to the book Pharmako AI, in which the AI says some truly profound things. And some AI models (such as Claude) are more ethical, transparent, and tentative than others. In my own experience, Claude certainly makes mistakes, but this keeps me on my toes as a programmer – pointing to the need for programmers, at least, to retain some knowledge of programming. When it works, I truly feel that ‘we’ are creating something together.

Another participant finds it frustrating when a chatbot apologises, because it doesn’t seem genuine.

Finally, one of us noted that Jackson points toward an entire research programme investigating how AI is reshaping our conceptions of subjectivity, authenticity, voice, knowledge, creation and production.


I have included direct quotes here; I made clear to participants that I follow the ‘Chatham House Rule’: any participant may repeat anything they hear in this forum, as long as they don’t identify the speaker.
