17 Comments
James Torre

> The Landian strategy corresponds roughly to this: instead of playing games (in a very general, abstract sense) in accordance with a utility function predetermined by some allegedly transcendent rule, look at the collection of all of the games you can play, and all of the actions you can take, then reverse-engineer a utility function that is most consistent with your observations. This lets one not refute, but reject and circumvent the is-ought problem, and indeed seems to be deeply related to what connectionist systems, our current best bet for “AGI”, are actually doing.

> Not only can this insight be granted a formal treatment, but it also provides a deeper look into, and support for AI = Capitalism since in this way one attains the emergence of some kind of prescriptive dimension from a descriptive basis, suggesting how means-ends-reversal can take place and why (the “gravitational pull” of intelligence optimization as abstract general competence)
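
Before responding, a toy sketch of the sort of reverse-engineering I take this passage to describe, so the target is concrete. The feature vectors, the softmax choice model, and the random-search fit below are all my own illustrative assumptions, not anything claimed in the article: one observes choices across many games, then fits the utility weights that make those choices most likely.

```python
# Toy "revealed preference" fit: given observed choices across several games,
# recover the utility weights that make those choices most likely.
# Everything here (features, softmax choice model, random-search fit) is an
# illustrative assumption, not a claim about the article's actual method.
import numpy as np

rng = np.random.default_rng(0)

# Each game offers a handful of actions, each described by a small feature vector.
games = [rng.normal(size=(rng.integers(3, 6), 4)) for _ in range(200)]

# Hidden "true" preferences, used only to generate the observed play we fit against.
true_w = np.array([1.5, -0.7, 0.3, 2.0])
observed = [int(np.argmax(g @ true_w + rng.gumbel(size=len(g)))) for g in games]

def neg_log_likelihood(w):
    """Softmax choice model: P(action) is proportional to exp(utility)."""
    total = 0.0
    for g, choice in zip(games, observed):
        u = g @ w
        total -= u[choice] - u.max() - np.log(np.exp(u - u.max()).sum())
    return total

# Crude gradient-free fit (hill-climbing random search) keeps the sketch dependency-free.
w, best = np.zeros(4), neg_log_likelihood(np.zeros(4))
for _ in range(3000):
    candidate = w + rng.normal(scale=0.1, size=4)
    score = neg_log_likelihood(candidate)
    if score < best:
        w, best = candidate, score

print("recovered weights:", np.round(w, 2))
print("true weights:     ", true_w)
```

The utility function shows up only at the end, as whatever best compresses the record of play, not as a premise handed down in advance - which is where my reservation enters.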

Appeal to autoencoder is a clever technique, but I am skeptical it achieves full independence from received truth, because notions of consistency are themselves a parameter subject to variation - why examine one's collection of games with an eye towards consistency as opposed to paraconsistency, for instance, if not for an _a priori_ commitment to the former?

The atheism of perfect (or maximal, if that is all which is attainable) naturalism is adamantine, and omni-dissolutive; the relativity of consistency merely corroborates this with deeply foundational attestation. Eschewing God achieves a genius of seeing, through a lens of otherwise inaccessible clarity, across the landscape, but one which is nevertheless restricted by the absence of irruption - itself the ultimate of your _lignes_.

> The accelerationist perspective has something to say about nearly anything, one is free to use it to play around with whatever excites one’s curiosity, though some topics promise a much better return on investment of effort than others.

Given the above, and this invitation to idiosyncrasy, I will note that characterizing consistency space, and determining how any one point bears or overbears upon others, is directly within the remit of Autarkic Formal Systems research.

|||||

Total independence from any received truths is a questionable objective in the first place, since it is horribly intractable even systemically, much less at the individual or component level; the emphasis is on disloyalty to such truths at the top, or esoteric, level, so to speak. Of course, why should not that tentative minimal principle also be subject to disloyalty? In practice I certainly expect it will be, as it most certainly has been, and perhaps that is one of its greatest strengths, at least as far as providing food for thought goes. There are some interesting potential paradoxes around here to be discussed down the line and in other articles, such as the eventual one on duration and the body of absolute nothingness.

As for consistency, if anyone figures out how to make a self-driving *paraconsistent* car that others would be willing to buy, more power to them! The field is open. Naturally, what is at stake here is formatting, and if I gave the impression of some kind of total convergence and crystallization, that is a fault on my part. I believe that the difficulties raised by the anti-orthogonalist, or what we might call connectionist, strategy admit no such thing.

The term naturalism would require some latitude in order to apply. If it simply means no appeal to the supernatural, OK, but games are terrifyingly general and flexible objects, with no more than a very loose connection to any external reality required, so this would be an often very abstract and elusive Nature, subject to constant mutual disputation.

James Torre

> As for consistency, if anyone figures out how to make a self-driving *paraconsistent* car that others would be willing to buy, more power to them!

This is indeed the default, insofar as hypotheses are admitted and acted upon without demanding invariance in an originary, perfect intra-mutual integrity of the components of a system's world model. More broadly, paraconsistency is an example of the many species of systemic properties that can be identified once "consistency" is investigated and decomposed, and this decomposition becomes more relevant the more an argument appeals to mathematics as its justification.

> if it simply means no appeal to the supernatural, OK

Yes, with especial emphasis on the thoroughness with which such circumscription (so sharp as to seem sheer, and given the poetic image of an abyss) must be upheld.

Kevin McCoy

New reader here. Looking forward to pt 2. I like the “pragmatic” approach to your term concrete expressivity. The challenge is not to have ghosts of humanism policing the edges of what is recognized as expressive. Acc by definition is going to (eventually) speak a language we don’t even hear or recognize.

f_d
Mar 26 (edited)

>This class of problems seems to be no closer to resolution than it was a century ago, so what are we to do? The Landian strategy corresponds roughly to this: instead of playing games (in a very general, abstract sense) in accordance with a utility function predetermined by some allegedly transcendent rule, look at the collection of all of the games you can play, and all of the actions you can take, then reverse-engineer a utility function that is most consistent with your observations. This lets one not refute, but reject and circumvent the is-ought problem, and indeed seems to be deeply related to what connectionist systems, our current best bet for “AGI”, are actually doing.

What do you mean here, some kind of autocomplete extrapolation from present conditions? LARPing as who you think you are based on the information about yourself available to yourself? If so, how do you account for personal growth and the 'true frontier' of human thought?

|||||

In order to cope with natural demands, any organism is already, in a minimal sense, an idea of itself; that is, there is no static and fixed blueprint of self that it adheres to, but a network of interacting dynamics to achieve allostasis. The point here is that the orientation of action can be at least partly based on more objective relations with one's environment. However, this is also a very complex problem, indeed impossible to perfectly solve (e.g. the Frame Problem), and that is the room for subjective determination. But even a partial solution attains a better order of operations and feedback between them, partly because it also reforges the environment to make its presentation of this problem more tractable.

saul

Haven't seen the accelerationists engage the energy question. There wouldn't have been an industrial revolution without the revolution in energy, right? Or is there any abiotic oil hopium?

|||||

Most people interested in it seem to not have a STEM background, so that's not too surprising. Are you talking about peak oil or more generally? Certainly we are not using nuclear anywhere near as much as we should, which makes the claim that Capital is puppeteering absolutely everything at all times much more dubious.

saul

Yeah, peak oil is what's on my mind, cos I just watched this documentary called Collapse featuring Michael Ruppert, and his conclusion is that modern civilization is bound to collapse cos of the energy crisis. It made me think about all the cyberpunk AI futurist fanfiction, cos the very fundamental question of energy is neglected a lot in these spaces.

dotyloykpot

So unlike unconditional acceleration, which focused on the lack of human agency in the feedback loop, in this variant of r/acc you are proposing human agency through generalized game winning? If not, this starts to sound a lot more like a u/acc than an r/acc.

I would also note that your definition has a different aesthetic than e/acc, which tends to focus on human collective action to achieve scifi projects like space travel, mars colonies, or fusion power. Because if the focus is on general game winning and competition, then it's a much more fragmented future you're presenting. If not, then this might be more of an e/acc than an r/acc.

R/acc as you're presenting it here is a bit tricky because you have to thread between u/acc and e/acc. It's very difficult to say that humans still have agency yet will not use that agency towards collectivism. That's why common parlance usually puts r/acc as the mirror of l/acc: collectivism through hierarchy instead of through equality.

|||||

It's natural that people talk about agency but that's a contextual, extended and interactive property, therefore of unbounded potential complexity. So I prefer to think about autonomy first, and that's much more tractable, if still difficult.

I'll go more into this in the last piece of this series, but I simply do not see a way for acceleration to be autonomous enough to foreclose agency, at least not an honest way, without metaphysical counterfeiting. That doesn't mean humans are left with anything satisfactory, necessarily.

E/acc is a kind of mentally castrated mutation of some r/acc basics for LinkedIn grifting while the conclusions of u/acc are roughly r/acc's starting point, so the resemblance is not accidental, but the differences are consequential. It's not a mirror to l/acc because that posits a disconnect between production and capital, something they have never bothered to substantiate.

zan

music to my eyes

Devaraj Sandberg

And yet AI=Capitalism=Intelligence is visibly hitting problems... It's more attractive to capitalism to scale and hype LLMs than to invest the megabucks needed to develop the actual load-bearing structures from which AGI might realistically emerge. Future superintelligence might look more like dumbed-down humanity rather than AGI 1.1.

Lumpen Space Princeps

it's not real agi! it's not real agi!

i scream as i am slowly turned into paperclips

Devaraj Sandberg

Our capacity to envision the future, and be shocked by the fangs of the unseen, mobilises libidinal energy. So I guess the paperclip maximiser is better than agi anyway.

reiwamale

🐶🧠

Arkacandra Jayasimha

GNON knows we need more newness in Accelerationist theory. I'm looking forward to seeing where this goes, especially considering my own work is pretty heavily enmeshed in the older strands and past of this thinking.
