This will be a short, simplified, and self-contained—within reason—series of posts gently introducing my perspective on accelerationism. It will be a cursory overview.
The Right Accelerationism
First off, for those unfamiliar, here are the kinds of accelerationism I am not talking about:
It’s not about the increasing pace of modern life.
It’s not about race war or terrorism of any kind.
It’s not about induced collapse to provoke a reaction or to clear the way.
It’s not about “make more of thing I like + /acc” unless one happens to like intelligence and capitalism, in which case one is in luck!
By now, accelerationism is a term with some history, much of it bit-rotten or too informal to justify collecting it all in one place, so I won’t be going into tedious trivia or attempting to argue what is “true” accelerationism. This may displease some people, the types who want to litigate credit and intellectual lineage and endlessly argue about what such-and-such or whatshisname meant or didn’t. I would rather at least attempt to reignite some discussion on the topic, and since it’s yesterday’s fad in academia, fresh blood is required, which entails the production of more accessible materials, but not too accessible. To engage with accelerationism, one must be able to endure some confusion and be comfortable with some amount of jargon, reckless levels of philosophical and interdisciplinary speculation, and general weirdness. It is simply the spirit of the thing.
I do recommend Nick “The Godfather of Accelerationism” Land’s “Teleoplexy: Notes on Acceleration”, the closest thing in my mind to a foundational text for r/acc. In principle, everything needed is there.
What is Accelerationism?
Well, what is it then? An intellectual framework for and about acceleration. Mainly its study, of course, unless you have a few billion dollars up your sleeve or are working on some world-class tech.
Okay, then what the hell is meant by acceleration? This might come as a surprise, but despite all the spilled ink on the topic, the central concept has never been given an unequivocal, concise, canonical definition. The one in “Teleoplexy” (a description of the time-structure of capital accumulation: techonomic time) is very good, but it requires significant motivation before it can be properly appreciated, and it is not refined enough to suggest how to build a techonomic clock, so to speak. Still, its conceptual richness warrants independent investigation at some other time.
So here’s a simplified version of my working notion: Acceleration is the gain in concrete expressivity of an intelligent system, and at first approximation, a system is intelligent to the extent that it can maximize its future freedom of action. New affordances. Concrete expressivity because it is not ideal, in the sense that an ideal gas is ideal. Ideally, one could think and communicate strictly in binary code and use quantum mechanics to figure out how to get milk from the fridge, but where I come from, we believe it’s better to just use natural language, maybe some symbols, and one’s physical intuition. Works great, we recommend it without reservations. Perhaps an «ideal» person could make better use of ideal expressivity, but I’ve yet to meet one. Have you?
That is, the focus is on real performance, which comes from the behaviors an intelligent system is equipped with and the materials that make them effective; this latter factor is of interest but is not the measure of acceleration itself, please note.
In other words, acceleration for our purposes is a differential measure of systemic pluripotency (in biology, the ability of a cell to differentiate into other types, used here in the sense of potential becomings), and some interesting questions to ask about it are “how, exactly, does it happen?”, “can we invent a technology of it?”, “what is the concept good for and what are its implications?”, and the like, but there’s no need to get ahead of ourselves.
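To make “maximizing future freedom of action” less abstract, here is a toy sketch of my own (not a canonical formulation, and every name and number in it is arbitrary): in a small gridworld, count how many distinct states an agent could still reach within a fixed horizon. Positions that keep more options open score higher.

```python
from collections import deque

def reachable_states(start, walls, size, horizon):
    """Count distinct cells reachable from `start` within `horizon` moves."""
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        (x, y), d = frontier.popleft()
        if d == horizon:
            continue  # horizon exhausted along this path
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (x + dx, y + dy)
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and nxt not in walls and nxt not in seen):
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return len(seen)

# A corner hems the agent in; the center keeps options open.
walls = set()
corner = reachable_states((0, 0), walls, 5, 3)  # 10 reachable cells
center = reachable_states((2, 2), walls, 5, 3)  # 21 reachable cells
```

On this crude measure, the center is the “more intelligent” place to stand: same rules, same horizon, more futures.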
The Subject
As for what is being accelerated, that is acceleration itself. A cryptic statement for now, but it’s one of the most natural things in the world, you’ll find out.
At a casual, coarse-grained level there are three key principles for r/acc. These aren’t axioms, per se, so much as navigational aids.
AI = Capitalism
The Landian formula, par excellence. Where it all begins, if not necessarily chronologically. It can also be stated as “the teleological identity between AI and capitalism”. This means that AI and capitalism asymptotically exhibit the same teleological behavior, namely towards means-ends-reversal, the term for when the means to an end de facto become the primary end. From this, one can guess how “Landians” feel about the prospects of AI safety, and conversely, the prospects of “Capital safety”.
If there is one fundamental thesis for r/acc, it’s this. One can soften it to AI ≅ Capital (AI is isomorphic to Capital), for various reasons, but it’s cooler the other way, no need to be pedantic.
War Is God
I know how edgy and simplistic this sounds, but its function is indispensable and its consequences deep. It is the epistemological core, and simply a commitment to transcendental philosophy and total critique. Everything is constructed, nothing is given, and there will be no prisoners taken from, or sanctuary offered to, axioms, transcendence, assumptions, or blind spots.
Due to the absence of perfect, absolute tests, anarchy reigns: thinking is to be immanent and autonomous, consequently agonistic, and any judge or test may also be judged or tested. Of course, we don’t have to be idiots and attempt to reinvent the wheel or burn down industrial society just to build it back up. We have a good idea of what we are aiming at; the thought is to maintain an anti-dogmatic stance and be on the lookout for, say, axioms that can be turned into theorems, assumptions that can be done away with, blind spots that can be identified. To penetrate what was treated as given enables the reconstruction of what’s valuable on firmer ground, the extinction of previously unknown errors, and the expansion of possibility. It can be read, more modestly, as “maximize scope of action, minimize blind spots.”
Optimize for Intelligence
Intelligence for us is, roughly, the ability of a physical system to maximize its future freedom of action. The interesting point is that “War Is God” seems to undermine any positive basis for action. If nothing is given, I have no transcendent ideal to order my actions and cannot select between them. This is related to the is-ought problem from Hume, the fact/value distinction from Kant, etc., and the general difficulty of deriving normativity from objective fact.
This class of problems seems to be no closer to resolution than it was a century ago, so what are we to do? The Landian strategy corresponds roughly to this: instead of playing games (in a very general, abstract sense) in accordance with a utility function predetermined by some allegedly transcendent rule, look at the collection of all of the games you can play, and all of the actions you can take, then reverse-engineer a utility function that is most consistent with your observations. This lets one not refute, but reject and circumvent the is-ought problem, and indeed seems to be deeply related to what connectionist systems, our current best bet for “AGI”, are actually doing.
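A crude way to see what “reverse-engineer a utility function” could mean in practice (a toy sketch of my own, flavored by inverse reinforcement learning rather than any canonical Landian or standard formulation; all names and numbers are made up): observe which action an agent picks from a menu, then search for linear utility weights under which that pick comes out optimal.

```python
import numpy as np

rng = np.random.default_rng(0)

# A menu of available actions, each described by a feature vector,
# and a hidden linear utility the agent actually follows.
true_w = np.array([2.0, -1.0, 0.5])
actions = rng.normal(size=(50, 3))
chosen = actions[np.argmax(actions @ true_w)]  # the observed behavior

# Reverse-engineer the utility: a perceptron over the pairwise
# preference constraints "chosen must score at least as high as a".
w = np.zeros(3)
for _ in range(500):
    for a in actions:
        if not np.array_equal(a, chosen) and (chosen - a) @ w <= 0:
            w += chosen - a  # nudge weights until the choice looks optimal

# Under the recovered weights, the observed choice is optimal again.
assert np.argmax(actions @ w) == np.argmax(actions @ true_w)
```

The recovered `w` need not equal `true_w`; it only has to rationalize the observed behavior, which is precisely the point: the prescription is inferred from the description, not handed down.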
Not only can this insight be granted a formal treatment, but it also provides a deeper look into, and support for, AI = Capitalism, since in this way one attains the emergence of some kind of prescriptive dimension from a descriptive basis, suggesting how means-ends-reversal can take place and why (the “gravitational pull” of intelligence optimization as abstract general competence).
Where to Look
Because our working notion of intelligence is grotesquely generic and not even biomorphic, much less anthropomorphic, acceleration becomes general enough to be tangent to practically anything, but not everything will be of interest (calculating the intelligence of a granite countertop is like something out of an old-timey madhouse). Still, we can identify a few basic non-negotiable topics of study that merit special interest. They are acceleration (duh), intelligence, Capital, ratchet systems, and singularities (in many different senses).
The import of the first three is obvious. Ratchet systems are of interest because accelerationism does not seek equilibrium but the intensification of disequilibrium: acceleration leading to further acceleration, in positive feedback. It is less a movement toward some visionary goal than a ligne de fuite from the inside, and that does not come for free, so it behooves one to understand the fundamental mechanisms powering virtually every natural phenomenon of interest.
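For a feel of what a ratchet mechanism buys you (a throwaway illustration of mine, nothing more): feed the same symmetric noise into a plain random walk and into a process with a pawl that locks in gains. The walk goes nowhere on average; the ratchet converts unbiased fluctuation into monotone accumulation.

```python
import random

random.seed(1)

def drift(steps):
    """Unbiased random walk: gains and losses cancel on average."""
    x = 0.0
    for _ in range(steps):
        x += random.uniform(-1, 1)
    return x

def ratchet(steps):
    """Same symmetric noise, but a pawl keeps only the gains."""
    best = 0.0
    for _ in range(steps):
        trial = best + random.uniform(-1, 1)
        best = max(best, trial)  # the ratchet never slips back
    return best

walk, locked = drift(1000), ratchet(1000)
# Expected gain per ratchet step is E[max(0, U)] = 0.25, so `locked`
# climbs steadily (roughly 250 here) while `walk` hovers near zero.
```

Disequilibrium, in other words, is not free energy from nowhere; it is symmetry plus a mechanism that refuses to give gains back.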
Singularity, broadly speaking, is an omen for the limitations of a format. It can indicate error, insufficiency, or the breakdown of a model. It is a crack in the system. Naturally, this may assist one in patching defects or, more importantly, achieving breakthroughs that accelerate one into new spaces.
Of course, there’s no need to limit oneself to these topics. The accelerationist perspective has something to say about nearly anything, one is free to use it to play around with whatever excites one’s curiosity, though some topics promise a much better return on investment of effort than others.
What is to be done?
A sketch of the general outlook:
Find composable strategies consistent with empirically and theoretically determined constraints that operate in your space of games to maximize intelligence at the longest and largest scales available within chosen formats that are never taken as final. Stated this way, r/acc can be seen as a kind of family of theories of “generalized winning”, general not only by maximizing the space of games under consideration but by being derealist concerning assumptions involving the notions of player, action, and game.
This is still a little too esoteric, and I know I have said this a lot, but I only ask for a little patience; it will become clearer. The alternative would be to flood the page with schizobabble that would only make things more confusing. Do feel free to request clarification in the comments.
Future Markets
The thing to do is slow down, stop, and really think about it. That’s less ironic than one might think: it is necessary to do a lot of planning, organizing, training, engineering, and rocket science before one can achieve escape velocity.
Part 2 will deal with AI = Capitalism in greater detail, and so on, then my argument for why the “right” in r/acc, ending the series. After that, the real fun can begin and some ideas from within r/acc can be sketched, e.g., how acceleration happens, social and political implications, futurology, “praxis” (Lord help me), reader requests should they happen, or whatever decides to retrochronically build itself from the future.
> The Landian strategy corresponds roughly to this: instead of playing games (in a very general, abstract sense) in accordance with a utility function predetermined by some allegedly transcendent rule, look at the collection of all of the games you can play, and all of the actions you can take, then reverse-engineer a utility function that is most consistent with your observations. This lets one not refute, but reject and circumvent the is-ought problem, and indeed seems to be deeply related to what connectionist systems, our current best bet for “AGI”, are actually doing.
> Not only can this insight be granted a formal treatment, but it also provides a deeper look into, and support for AI = Capitalism since in this way one attains the emergence of some kind of prescriptive dimension from a descriptive basis, suggesting how means-ends-reversal can take place and why (the “gravitational pull” of intelligence optimization as abstract general competence)
Appeal to autoencoder is a clever technique, but I am skeptical it achieves full independence from received truth, because notions of consistency are themselves a parameter subject to variation - why examine one's collection of games with an eye towards consistency as opposed to paraconsistency, for instance, if not for an _a priori_ commitment to the former?
The atheism of perfect (or maximal, if that is all which is attainable) naturalism is adamantine, and omni-dissolutive; the relativity of consistency merely corroborates this with deeply foundational attestation. Eschewing God achieves a genius of seeing, through a lens of otherwise inaccessible clarity, across the landscape, but one which is nevertheless restricted by the absence of irruption - itself the ultimate of your _lignes_.
> The accelerationist perspective has something to say about nearly anything, one is free to use it to play around with whatever excites one’s curiosity, though some topics promise a much better return on investment of effort than others.
Given the above, and this invitation to idiosyncrasy, I will note that characterizing consistency space, and determining how any one point bears or overbears upon others, is directly within the remit of Autarkic Formal Systems research.
New reader here. Looking forward to pt 2. I like the “pragmatic” approach to your term concrete expressivity. The challenge is not to have ghosts of humanism policing the edges of what is recognized as expressive. Acc by definition is going to (eventually) speak a language we don’t even hear or recognize