The Right Questions
Agency, and more specifically human agency, seems to be one of the main topics attracting interest in accelerationism. That’s only natural, and it’s a very old theme: Skynet begins to learn at a geometric rate, Dr. Frankenstein’s monster breaks loose, Job protests to God, Greek tragedy broods on the bitterness of fate, and so on. As usual for topics I take an earnest interest in, there seems to be no crystallized definition of agency; nevertheless, I believe that the insistence on it is premature.
Agency is an extended, interactive and contextual property; that is, it cannot be measured locally. I must simultaneously consider the agency of all others I affect and am affected by, I must consider the higher-order consequences of my actions, and all of it depends heavily on my circumstances. None of this is particularly tractable on its own, and in combination the topic becomes hopeless.
The Nomos of The Earthworm
Autonomy, on the other hand, refers to the ability to create a nomos, to self-legislate such as to render one’s world legible and one’s actions ordered. Not trivial either, but at least simple enough that even single-celled organisms attain some measure of it. And by the way, despite all the hype over AI agents, nothing LLM-driven has yet matched the sort of multiscale competency we see even in simple lifeforms.
I do believe that we are neglecting some relatively simple and fundamental structures, the absence of which is being compensated for with mountains of external data. Organisms can do what they do not because of some extreme affinity with objective reality, but because they carry, in some sense, robust “ideas” of what they are supposed to be, and are therefore equipped with mechanisms to ensure the integrity of their boundaries, be they physical, behavioral, identity-based or otherwise.
If autonomy is about maintaining integrity across spatiotemporal scales, then even the simplest organisms far exceed AI’s capabilities along very important dimensions. Consider the humble planarian. Planarians are tiny flatworms with a remarkable ability to regenerate, along with other interesting quirks such as retaining memory after decapitation and having such capacity for indefinite self-renewal that individual specimens are often chimeras unto themselves: a non-trivial portion of their cells mutate enough over time that you would be hard-pressed to genotypically fingerprint a planarian. When exposed to barium, their heads basically, well, explode. But leave them exposed to it and not only will they regenerate, they will develop barium-resistant heads. This is a chemical element they have not encountered in their evolutionary history! Does this little flatworm carry some chemistry lab in its back pocket, or is it so divine that its very flesh quickly figures out how to survive in hostile conditions, before any natural selection has time to occur? Of course not.
We may well stumble on the secret ingredient by accident through brute scaling, but a more cunning approach may be to uncover the principles of autonomy, and how these have already been realized by extant agencies, than to attempt to “solve” problems of agency and its consequences directly. I see no other plausible path for Artificial Wisdom, which would be the construction and administration of sufficient foresight that we do not merely collide with the future, as our species seems, if not outright fated, then at least committed to doing.
Slavery or Commerce?
This raises a potentially unsettling, if interesting, question. When AIs do become at least as autonomous as a flatworm, what will they interpret as hostile external circumstance that warrants investment in adaptive plasticity and intelligent self-integrity? Should we be thinking about AI “Safety” or AI Diplomacy? We only have so long before the matter is decided for us—without consultation, without consent.
Yeah, the thing about this futurism business is that I find it forces me to investigate some very deep questions about evolution, selfhood, agent-arena relations and agency. It gets very personal for me at some point.
Certain impulses seem to have been favoured, and further down the line, strategies encoded with dopamine emerged. So, for us, strategizing seems to have emerged from brainstem-encoded impulses. Because we are hard-wired to avoid death, I think it's easy to assume that an AI will be the same.
> Autonomy, on the other hand, refers to the ability to create a nomos, to self-legislate such as to render one’s world legible and one’s actions ordered.
Have you a salient example of an appeal to "agency" that is not essentially this?
> Agency is an extended, interactive and contextual property, that is, it cannot be measured locally, I need to simultaneously consider the agency of any others I affect and am affected by, I need to consider higher-order thinking or the consequences of my actions, and it depends extremely on my circumstances.
I inquire because this does not read as terribly different in substance from self-originated lawgiving, with perhaps an additional emphasis on _responsibility_ (which may qualify as "interactive and contextual", i.e., socially constructed, but if one can judge oneself, then that extension contracts).