26 Sep 2025: Speculative Philosophy of Planetary Computation

New technology requires new philosophy, says Benjamin Bratton – especially what he calls ‘existential technologies’ as opposed to instrumental ones: those that prompt us to question so many of our basic concepts of reality. ‘[T]he artificialization of intelligence,’ he suggests, ‘will teach us much more about what thinking is than we will teach the machines.’

Central to this idea of existential technologies is what Bratton calls planetary computation: a ‘computational epidermis of satellites and sensors and various mechanisms by which it comes to communicate with itself.’ He detailed this previously in his book The Stack. This stack, he says, ‘is being replaced, a bit like Theseus’s boat, layer by layer, by one that is dependent not on classical computing architectures, but on neuromorphic architectures’. Specifically, planetary computation means a systems view in which the planet learns about itself and its processes.

I was sceptical about this: the technological layer built atop the planet is entirely created by humans. But if you consider humans as part of nature, is it really so separate? It depends on whether you view humans as altering the planet for their own ends, or see such a development as evolutionarily ‘inevitable’.

Bratton states that the moment humans began to realise our agency over the planet was ‘also the moment in which we figured out that it’s on fire’. We woke up, he says, ‘in a house on fire, that is on fire because we woke up’.

The paradox of intelligence

Intelligence, as defined by Bratton, is ‘something that our planet does’ over evolutionary time – specifically, folding matter into particular shapes to make, for example, primate brains, and ‘through this folding, it was able to perceive things about its own processes that it would not have been able to understand otherwise’. Human primates then learned how to make and use fire and electricity, he says, and eventually how to fold metals and minerals into complex shapes to enable what we now call artificial intelligence.

This raises the question: If each level of intelligence acts as a scaffold for the next, then what might AI be a scaffold for? What comes next? ‘This isn’t the end of the story,’ he says. ‘This is hopefully the midway point.’

Reflectionism

Given the above, Bratton puts forth the controversial view that AI should not be so human-centred, because if intelligence is viewed broadly beyond the human, a human-centred view might hinder a truly new form. ‘[A]lignment, as they say, between AI and human society must be bidirectional. It’s not just a matter of how AIs must adhere to what is suspiciously called human values, but also how societies adapt to the reality that inanimate objects are capable of complex cognition.’

He likens our relation to AI to two characters sitting opposite each other, ‘deciding whether the other is a mirror reflection or a true opposite, each is supposedly the measure and limit of the other.’

Facilitating the evolution of greater forms of intelligence, then, means getting AI out of the black box, off the screen and into the world. Since ‘natural intelligence’ evolved in interaction with its environment, ‘we should look for ways in which machine intelligence will evolve in the present and future through open worlds as well’. We might imagine that such worlds could be virtual or real.

‘This represents an infrastructuralization of AI but also a “making cognitive” of both new and legacy infrastructures. These are capable of responding to us, to the world, and to each other in ways we recognize as embedded and networked cognition. AI is physicalized, from user interfaces on the surface of handheld devices to deep below the built environment.’