28 Mar 2025: Exploring Artificial Wisdom

International Psychogeriatrics journal

This article from 2020 showed its age, as well as its focus on aging (specifically geriatric medicine). For example, the authors note that ‘clinical decision-making requires more than intelligent thinking – it requires wise thinking that incorporates ethical and moral considerations.’ Accordingly, they offer a scientific framing: ‘Human wisdom is a scientific construct supported by empirical research during the last 45 years. It has a scientifically based definition, validated measurement scales, underlying neurobiology and genetics, and relationship to aging that are all quite different from those for human intelligence.’

They are not entirely pessimistic: ‘Technology can enhance moral, ethical, and pragmatic decision-making through facilitating instantaneous feedback from trusted advisers, or gathering input from, and disseminating data to, large numbers of people at once.’

The fact that none of us were aware of the term ‘artificial wisdom’ (AW) shows that it has not so far taken hold in popular culture. But we found value in its framing of emotional intelligence. One of our participants said, ‘I think by introducing the term wisdom, it’s another way of saying intelligence alone is not sufficient for critical decisions.’ Empathy, compassion, and quality of life are all important to healthy aging.

Someone else added that maybe AI and AW are not separate things, as the authors frame them. Maybe we haven’t heard of AW because it’s simply a more advanced version of AI that wasn’t available in 2020? Evidence is in their proposed assessment of AI versus AW: they refer to the Turing test, which is now known to be limited and outdated in evaluating ‘intelligence’. Does the term ‘wisdom’ help us at all? Maybe separating intelligence and wisdom at least starts to add some nuance. (It reminds me of the distinction I made in the MA programme that I ran, between ‘information’ as quantitative data and ‘experience’ as sensory and embodied.)

More directly related to AI, it was observed that when you prompt a chatbot with emotional language – ‘Can you please help me with this? It is very important for my career’ – you get better results. Relatedly, the more trauma and anxiety you throw at a chatbot, the more anxiety it will show – making it hugely unsuited for therapeutic applications. Antonio Damasio’s book Descartes’ Error was mentioned: emotions come into play even when we make supposedly rational decisions, and emotions are fundamentally embodied.
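
As a toy illustration of that ‘emotional prompting’ observation (not the article’s method, just my own sketch), the snippet below compares a neutral request with the same request framed in emotional terms. It assumes the OpenAI Python client with an API key already configured; the model name is a placeholder.

```python
# Hypothetical comparison of a neutral prompt vs. an emotionally framed one.
# Assumes the OpenAI Python client is installed and OPENAI_API_KEY is set;
# the model name below is a placeholder, not a recommendation.
from openai import OpenAI

client = OpenAI()

task = "Summarise this project proposal in three bullet points: ..."
prompts = {
    "neutral": task,
    "emotional": "Can you please help me with this? It is very important for my career. " + task,
}

for label, prompt in prompts.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder
        messages=[{"role": "user", "content": prompt}],
    )
    print(label, "->", response.choices[0].message.content)
```

Whether the ‘emotional’ version actually produces better answers is an empirical question, and the anxiety-mirroring effect mentioned above cuts the other way.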

But when we start talking about wisdom and ethics, the question becomes: whose ethics? Whose values do you introduce into a system? Happi.ai was mentioned – the founder’s approach and ethics are explicitly reflected in the app.

For a rather scientific article, there are some broad, unsupported claims, such as: ‘AI is superior in some aspects to human intelligence such as visuospatial processing speed and pattern recognition, but lags in terms of reasoning, new skill learning, and creativity.’ This assumes a singular definition of both AI and human intelligence, as well as of creativity. Even within areas such as visuospatial processing and pattern recognition there are nuances – I think humans can still spot some visual patterns more quickly than AI.

All of which does at least point to the limitations of ‘intelligence’. I like this quote from Justin Gregg’s book If Nietzsche Were a Narwhal: ‘If we look at intelligence from an evolutionary perspective, there’s every reason to believe that complex thought, in all its forms throughout the animal kingdom, is often a liability.’

While ‘artificial wisdom’ hasn’t taken off, the world is catching up to what gerontology has been saying for a while, which the authors repeat here:

‘If we think about consumer-level AW applications, most people will likely have a companion robot (similar to the current ubiquity of smartphones) in the future. Once AW agents are developed, they can be delivered by various other forms (e.g., walls of homes, eye glasses, hearing aids) to support human well-being. Social robots will be designed to interact with users and proactively communicate with them’

This comes back to the idea that emotions are embodied, and that AI won’t get very far without a body. More than simply reading and conveying emotions, the body holds memory, trauma, and more. You might react quite differently to being hit by the door on your way out versus being hit by your sister – there is an emotional reaction to physical stimuli. And conversely, emotions have physical and chemical manifestations: you can feel them in your gut. Then, bringing in epigenetics for example, the body is inextricably connected to the environment (see Deleuze, Coccia).

More generally, my belief is that it is not valid even to compare humans and AI – machines are nothing like humans. BUT: someone else pointed out that if AI can mimic humans (in some ways) convincingly enough, Turing test-style, does it matter? Take ‘creativity’, for example: if people believe something is ‘inventive’, does it matter who/what made it? For narrowly defined domains, is AI human enough? Perhaps it is creative in one medium; does it need to be creative in all? (Here I’m reminded of J.J. Gibson’s environmental psychology, which views the physical world in terms of surfaces, substances and medium, where ‘medium’ in the human physical world is air.)

The flip side: AI might make a beautiful painting or piece of music, but what’s (maybe) missing is the intent, the concept, the story behind it. The ‘death of the author’ in postmodernity notwithstanding – meaning is in the reader/viewer’s interpretation. Could AI generate a ‘paper trail’ or back story that is convincing enough? There might be nothing behind the eyes – or no eyes at all! Is this ‘emotional intelligence’ or manipulation? Does it matter that you could never meet ‘them’ in physical form? What if you interact with most of your friends online anyway? What about a video avatar that is completely realistic? Platforms (perhaps including MS Teams, which we use for this reading group) are collecting lots of ‘multimodal’ data to make agents more realistic.

Relatedly, someone mentioned that if such a platform was collecting and saving all this content, our views about AI might be held against us in the future. One participant cheekily suggested reading the entirety of Terms and Conditions as a spiritual practice.

Conversely, someone pointed out that they see (young) people every day manipulating their faces and bodies to be less expressive (Botox) and more artificial/idealised – more like GenAI, in other words, even as GenAI approaches realism. In the new normal, when parents happily sit their children down with an iPad (10 years ago) or AI (now), does this approach Spielberg’s vision of A.I. (in the film of the same name) – an artificial child that grows up with you? Covid provided a nice precedent for training children as much as training AI, in/from online interactions. The authors of the article seem to suggest something similar with their notion of AW. To quote Matt Dryhurst/Holly Herndon/Serpentine: All Media is Training Data.

Consider recent research from OpenAI suggesting that people feel more connected to chatbots that use only text, no voice. (How many people do or should feel connected with chatbots at all is another issue.)

To me the biggest failing of this article, as with so many written pre-ChatGPT, is that it does not engage with the economic context surrounding technology, and AI in particular. We might have robots or even artificial wisdom, but who pays and who profits?


At the beginning of each session, I make clear that it’s a safe space for discussion, and also that I like to follow ‘Chatham House Rule’ which stipulates that anyone can repeat what they learned in the session, as long as they don’t identify the speaker. My summary here reflects this.
