Runaway Success?


Today's Financial Times editorial comments on what appears to be an impending modern gold rush, with tech Über corps such as Amazon and Google in the vanguard of the development and deployment of 'small modular [nuclear] reactors', or SMRs, to provide the power that their increasing use of AI will require. That power is unlikely to be available to them from national grids: their huge demand for electricity would be an unsupportable burden on domestic supply and on systems already throttling back on carbon. Instead they are looking towards SMRs to provide their energy needs on a decentralised [private] basis. All well and good, you might think, given that nuclear energy is inherently low-emission at normal operating levels, theoretically cheaper and long-serving, and very energy-dense: more bang for your buck for longer, one might say.

But the problem with nuclear is two-fold. First, any reactor system yet devised sows the seeds of its own destruction: the intense radiation that goes with the technology accelerates materials failure, so reactors do not tend to have long safe operating lives. Second, the consequences of failure are serious, as the - admittedly small number of - serious reactor failures we've seen over the last seventy years make plain. Each and every one of those failures was extremely serious, with very long-term consequences; the potential for catastrophic events is ever present with this technology. Never mind the huge quantities of nuclear waste produced in the normal course of operation, a considerable portion of the very worst of which will still be a problem for our descendants 10,000 years hence.

If it is imagined that the smaller scale of the SMR somehow militates against, or scales down, the level of potential catastrophe inherent in the system, I'd simply say this: a runaway reactor or meltdown is serious at any scale, and the operation of multiple, locally-based reactors actually upscales the disaster potential by dint of sheer ubiquity. It's not a great prospect. The FT piece finishes in a tone of what could euphemistically be called hope against hope: 'There is also scope to use AI to improve energy efficiency in factories, offices and across grids. In the new age of electricity, AI must be not just another energy-hungry mouth to feed, but a central part of the green solution.' Best of luck with that one, given the current performance of the technology and the commercial desires and ambitions of the capitalists intent on driving us in this direction.

Two things are self-evident. One is that we are letting AI learn from us - humans - that notoriously flawed, venal and fickle species of higher ape, but without any built-in guiding moral compass of its own. Two is that the energy needed to power the AI that mediates its own power consumption grows in step with its widening deployment in that very process: a totally circular arrangement that can only spiral out of control exponentially. It's the positive feedback loop from hell [cf blog posts passim on such horrors]. As for letting AI control the generation of that [nuclear] power to feed its own maw? God alone knows what could transpire. I'll leave you to fill in the gaps beyond this ellipsis...
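By way of a back-of-the-envelope illustration of that loop, here is a minimal toy sketch - my own numbers, purely assumed for the sake of argument; base_load, growth_rate and mediation_overhead are not figures from the FT or anywhere else. It simply models AI load that grows year on year, while the AI that mediates the grid itself consumes a fraction of every unit of load it manages: overhead on overhead, summed as a geometric series.

# Toy sketch of the circular demand described above: AI deployment grows,
# and the AI used to manage that energy itself consumes a fraction of the
# load it manages. All parameters are illustrative assumptions.

def projected_demand(base_load: float, growth_rate: float,
                     mediation_overhead: float, years: int) -> list[float]:
    """Year-by-year total demand under two compounding effects:
    the base AI load grows by `growth_rate` each year (wider deployment),
    and managing each unit of load adds `mediation_overhead` of extra load,
    which must itself be managed, and so on (a geometric series)."""
    demand = []
    load = base_load
    for _ in range(years):
        if mediation_overhead < 1:
            # overhead on overhead sums to load / (1 - f) when f < 1
            total = load / (1 - mediation_overhead)
        else:
            # at f >= 1 the series diverges: the runaway case
            total = float("inf")
        demand.append(total)
        load *= (1 + growth_rate)  # deployment itself keeps expanding
    return demand

if __name__ == "__main__":
    for year, gw in enumerate(projected_demand(10.0, 0.3, 0.1, 5), start=1):
        print(f"year {year}: {gw:.1f} GW")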

Comments

  1. South Sea Bubble mate.
    Making the waste problem tertiary is an inversion of the actual risks!
    ATB
    Joe

