Here is a summary of our discussion of this article.
In terms of its form, I held this article up as a good example of a structured argument (like a dissertation or PhD thesis): it sets out its aims and argument very clearly, breaks the argument down into coherent sections, and carefully details the evidence for each part, supported with references from the academic literature.
However, I identified a flaw inherent in the argument itself. When I first encountered this article and read the title and premise, I thought, ‘Yeah, that sounds about right.’ But as I read it, it quickly became clear that the author is making a direct comparison between AI systems and humans. To me, this is an invalid and nonsensical comparison: AI systems are nothing like any kind of biological organism.
Even if we (somehow) set biology aside to compare AI and humans in terms of purely cognitive ‘intelligence’, the argument depends on the flawed premise of a shared definition of the latter. To me, the way AI systems ‘think’, if it can be called that, is not comparable to the way humans do.
Similarly, it makes no sense to speak, as the author does, of the beliefs or honesty of an AI system, as if it were an independent and autonomous being. No matter how complex and seemingly human-like they are, AI systems are technological tools created and used by people. In my view, there will always be people behind and in front of such systems – people who develop, train, inform, and use them.
Accordingly, I believe that ‘Artificial General Intelligence’ is a nonsense term in the absence of a definition of intelligence shared with humans. I do not believe that ‘AIs’ will attain ‘consciousness’ or autonomy such that they develop the intentionality and drive to eliminate humans.
The closest thing I have seen to a definition of intelligence shared by humans and AI systems is quantitative: Does a given AI system ‘know more’ than you or I do? Yes, in many cases. Does this make such a system ‘smarter’ than us?
Humans in the loop
It’s the humans you need to watch out for. The ones behind those AI systems, the ones driven by their own profit above all else. This is the real source of competition.
Relatedly, a lot of the economic and evolutionary theory drawn on in this article assumes that individual agents make rational choices; plenty of economic research provides evidence to the contrary. Darwinian evolution has become such dogma – akin to a religious doctrine that goes unquestioned and is taken for granted – that for any given behaviour, some researchers seem to believe there must be an underlying evolutionary driver, and go looking for it. This is confirmation bias.
Another assumption this article makes – as many others do – is that humans are the dominant species on Earth. This is a key part of the climate change narrative: humans have altered the natural environment and atmosphere, perhaps irreversibly. I don’t deny that this is the case, but it is incorrect to say that humans are dominant in number (invertebrates and micro-organisms far exceed us) or in power (a virus recently stopped human activity worldwide; another could extinguish it altogether).
One more flawed argument from the article: ‘A human is unable to modify the architecture of her brain and is limited by the size of her skull.’ Research on neuroplasticity disproves this, I believe. In addition, it is known that environmental changes can propagate to inherited genes. (If I ever write this as a formal academic paper, I will dig out references.)
The good news
For all its flaws, I believe this article is worth reading. I have come to see that no matter how you feel about AI, the best thing about it is that it is prompting us to question almost everything – to rethink and redefine what we mean by creativity, agency, work, relationships, values, the socio-technical and political systems that govern our activities. Maybe even the theory of evolution.
Accordingly, as I said at the start, this article carefully lays out and evidences a clear argument, as well as possible scenarios for how AI might play out in human societies. Even if the basic premise is flawed, many of the scenarios presented are eminently possible, thanks to the human creators, drivers, enablers, and users of AI systems.
In particular, the part about goal conflict in multi-agent systems is, I think, the most useful part of this paper. It highlights that ‘responsible AI’ can’t be done with simple, linear fixes.
Here, for example, are some other valid arguments from the article – which I attribute to human, not AI, actors:
- ‘oversight will be removed in the name of efficiency’
- ‘Competition and power-seeking may dampen the effects of safety measures’
- ‘governments granting AIs rights, like the right not to be “killed” or deactivated’
- We already know that AI systems, in their training data, can lock in antiquated values.
- As AI agents begin to understand human psychology and behavior, they may become capable of manipulating or deceiving humans (some would argue that this is already happening in algorithmic recommender systems).
Staying competitive in the marketplace today means you have to adopt, use, and to some extent rely on AI. This assumes that ‘the marketplace’ is a competitive space ruled primarily by evolutionary principles. This is increasingly the case, particularly with the current US government and the knock-on effects it has on other economies. As the article points out, ‘intense competition in a free market can result in highly successful companies that also pollute the environment or treat many of their workers poorly’.
New York Times columnist Ezra Klein has said that capitalism is itself a form of AI – one that long predates the technological version, and is enormously successful.
Competition v cooperation
I have long believed that the fundamental difference between a generally conservative mindset and a more progressive one is about care: Do you care more for yourself and your in-group (family, community, nation), or do you value diversity and the common good of everyone? From a purely evolutionary perspective, the article provides evidence that diversity is more beneficial.
The article supports such a view with regard to ‘a wider range of actors utilizing their own versions of AI agents’. Another quote: ‘[just as] an army composed of warriors, nurses, and technicians would likely outperform one that only has warriors, groups of specializing agents can be more fit than groups with less variation.’
After the recent fires in Los Angeles, I saw a news report. A couple had built a fireproof home, and it withstood the fire, while every other house on the block burned down. But they were not happy: they had lost their neighbourhood – one of the things that made living there worthwhile.
It’s like survivalists building bunkers to wait out the impending apocalypse. Do you want to live for months or years in isolation on canned food and artificial light and air? Might it not be better to work with your neighbours (near and far) to build a resilient community, or even beat back the apocalypse?
Competence without comprehension
That’s a great phrase. It comes from a talk by the philosopher Daniel Dennett, who is cited in the article and was a keen observer of evolution. Dennett was referring directly to biological evolution with that phrase, and comparing it to ‘intelligent’ design as undertaken by humans.
To me, the phrase perfectly describes how AI systems work. But it doesn’t mean we can equate them with biological systems – in fact, Dennett was differentiating the ‘natural’ from the ‘artificial’, and I would place AI firmly on the artificial side, as the name suggests. That’s the human-controlled side.
I don’t deny that some evolutionary principles can apply purely to ideas: I read Dawkins’ The Selfish Gene as an anthropology undergraduate and bought into his meme theory long before memes became a thing on the internet.
Relax.
One final objection to the paper, and to AI hopes and fears in general. A lot of people these days equate ‘the world’ or ‘reality’ with what happens on screen. Love it or hate it, AI still (mostly) exists in digital space. It might gain (via humans) the ability to control robots and other systems that affect the nondigital world.
But in popular and academic discourse, there is a tendency to forget that a world exists outside the screen. Yes, many of us are glued to our screens most of the time. Yes, I work in a research centre that takes as its premise a condition in which digital technologies have become central to our identities, interactions and infrastructures. But there is another world out there. However bad AI gets, we can always step away, or pull the plug.
If someone is convinced by an AI chatbot to do something bad, I might suggest that perhaps it is time for that person to step away from the screen. If someone gives an AI system sufficient autonomy to do harm, I would like that individual to be held personally accountable.