I think the point is that there’s nothing hazardous inherent in its nature, and pointing to its problematic uses under capitalism is no more a description of ‘its nature’ than pointing to an ass is a description of a chair’s nature.
AI is a tool, just like any other, and the harm caused by that tool is largely defined by how it’s used and by whom.
There’s no doubt that LLMs and other generative models are disruptive, but suggesting they are inherently harmful assumes that the things and systems they are disrupting aren’t themselves harmful.
Most of what you’re pointing to as harm caused by AI is far more attributable to the systems it exists within (including, and especially, capitalism) than to the models themselves. The only inherent issue I can see with AI is its energy demand - but if we’re looking at energy consumption broadly, then we’d be forced to look at the energy consumption of capitalism, and of consumerism under capitalism, too.
I imagine the sentiment here would be wildly different if we were scrutinizing the energy demand of gaming on a modern GPU.
“We sometimes think that technology is essentially neutral. It can have good or bad effects, and it might be really important who controls it. But a tool, many people like to think, is just a tool. Guns don’t kill people, people do. But some philosophers have argued that technology can have values built into it that we may not realize.”
Sure, but Abigail wasn’t really advocating against transhumanism or technology generally… The critique in that video is that technology isn’t really the focus of the disagreement between transhumanism and anti-transhumanism, but rather the ‘dressing’ around a deeper phenomenological belief (for transhumanists, it’s the belief that technology will save us from the inequity and suffering created under capitalism; for anti-transhumanists, it’s the belief that technology and progress will subvert the ‘natural’ order of things, and that we must reject them in favor of tradition). Both arguments distract from what is arguably the more pressing issue - namely, that technology does nothing to correct the contradictions of capital, and may even work to accelerate its collapse.
I would really enjoy a discussion about how AI might shape our experience as humans - and how that might be good or bad, depending - but instead we’re stuck in this other conversation about how AI might save us from the toils of labor (despite centuries of technological progress never having brought us any closer to liberation) versus how it might be a Trojan horse that demands we return to a pre-AI existence.
It might be more productive for you to argue the case for why the effects or harms you’re pointing to are somehow ‘inherent’ to AI itself, rather than symptoms of capitalism exacerbated by AI.