Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.
Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
Regarding a project to translate several thousand ancient letters:
So, um… this is bad. Really bad. I looked at the letters that were translated by the AI, and the very first one I found was almost entirely hallucination.
And here I thought “people being easy to replace with a small shell script” was a joke…

fig. 1: @self hard at work keeping awful systems up and running

from Unix World 1985. enterprise computing was so much more fun in those days
Hey, we hailed our @self!
Man, if only I had enough optimism left to aspire to that level of silliness, as opposed to sliding further and further into the maw of computer stupidity.
Thank you @self, love your haircut
if you use Rust enough it just grows like that
An LWer is super-impressed by the time-travel fantasy Illumine Lingao (an example of the chuanyue genre)
https://www.lesswrong.com/posts/YiRsCfkJ2ERGpRpen/leogao-s-shortform?commentId=J4YGrY26Ezt5oMsot
Listen to this pitch:
the vast majority of the book is devoted to discussing every single technical aspect in excruciating well-researched detail. you don’t simply have a paragraph about them deciding to buy guns, you get an entire chapter of different gun experts arguing back and forth about exactly which gun to buy based on maintainability, range, differences between civilian and military models, semi automatic vs fully automatic.
Apparently they’re quite unaware of the extensive number of works in Russian with similar themes:
https://en.wikipedia.org/wiki/Accidental_travel#In_Russian_fiction
Look, I’ve read some long-ass web novels. I enjoyed Worm, A Practical Guide to Evil, and Katalepsis, all start to finish. I have also spent more hours than I could count (even if I did care to) perusing excessively detailed fan wikis and reading internal debates between nerds about minutiae. I have done all of this and enjoyed myself greatly.
But the way they’re describing this sounds absolutely exhausting and incredibly dull. If this isn’t the result of some kind of collaborative project where the debates are between different actual people, then it sounds like you’re just dumping your worldbuilding notes onto the page and throwing in a “he said” every so often.
followup, here’s a real substack interview with one of the originators of the collab novel
https://afraw.substack.com/p/first-dig-the-latrines
to be honest sounds like semi-fascist shit to me.
An Aella-curious blogger in SoCal has noticed something:
But what I find more interesting than broadly “weird sex” is the specific interest in BDSM, kink and particularly full-contact CNC; a relatively common fantasy in individuals, but one I’ve never seen such widespread community interest in outside the Bay Area.
Kink and power-play are practices of manufactured risk, with CNC clocking in at a more intense point on the same spectrum. The idea that many of these people are devoting their 9-5s and beyond to eliminating the ultimate consequence (death), only to go home and collectively play-pretend violence (scaffolded with extensive rules and consent forms) is fascinating, and, to me, makes complete sense.
The rationalist interest in manufacturing risk is the direct byproduct of their commitment to flushing it out.
The blogger attended Aella’s SlutCon. I don’t know if she knows that many of our friends have problems with consent as most of us understand it (their understanding is more “if they are old enough to sign the contract, and they sign, that is on them”).
[Effective Altruism] was originally applied to initiatives like raising money for mosquito netting, but now includes figures like Johnson, who has reframed his blood experiments as a product of his own generosity, set to cure humanity of its greatest ill: death itself.
People keep saying this, so it’s good to have a reminder that the weirdos (derogatory) were there all along.
Gleiberman’s paper on the longtermist foundations of the Effective Altruism movement is great!
I read a post by someone leaving LessWrong-the-site who said that from now on he would only donate to Aubrey de Grey because obviously we are so close to curing aging. Found it: http://lesswrong.com/lw/m81/leaving_lesswrong_for_a_more_rational_life/
I think that after all is said and done, and after all the money Bryan Johnson spends to live forever, the end result will be: exercise, good diet, no alcohol, no tobacco, no drugs. He will still be pushing his product, but the basic advice will be what we already know.
The end result is that he will die, just like every other human being ever.
But along the way, he found a way to take estrogen wrong
WTF is this garbage in the Graurdain? “Let’s assume!” is a terrible premise for even an opinion column to begin with, but “let’s assume Musk is right and AI could allow us all to not work” is… bananas for the Guardian to publish. Even before considering that the author’s bio says he owns a technology and financial management services company.
The article’s entire premise is Musk saying some random shit. Remember how, 13 years ago, Musk said he would land a man on Mars within 10 years? Honestly, I am incensed that people like Musk and Trump can just say shit and many people will just accept it. I can no longer tolerate it.
Putting aside the very real human ability to screw up such a concept and turn any fair system into an unfair one, …
He says this after mentioning UBI. He really doesn’t want to confront the unfortunate fact that UBI is entirely a political issue. Whatever magical beliefs one may have about how AI can create wealth, the question of how to distribute it is a social arrangement. What exactly stops the wealthy from consolidating all that wealth for themselves? The goodness of their hearts? Or is it political pushback (and violence in the bad old days), as demonstrated in every single example we have in history?
I’d say the problem is even worse now. In previous eras, some wealthy people funded libraries and parks. Nowadays we see them donate to weirdo rationalist nonsense that is completely disconnected from reality.
No getting up early and commuting on public transit. …
This is followed by four whole paragraphs about how the office sucks and wouldn’t it be wonderful if AI got rid of all that. Guess what, we have remote work already! Remember how, during COVID, many software engineering jobs went fully remote, and it turned out that the work was perfectly doable and the workers’ lives improved? But then there were so many puff pieces by managers about the wonderful environment of the office, and back to the office they went. Don’t worry, when the magical AI is here, they’ll change their minds.
Yes, there are “mindless, stupid, inane things” like chores that are unavoidable. There are also other mindless, stupid, inane things that are entirely avoidable but exist anyway because some people base their entire lives around number go up.
This was my thought the whole time: if the political will existed, we could probably already do everything that AI is supposed to “enable” here. Some of the work people would choose not to do would turn out to be actually important, and the market in its infinite power would need to find a way to get it done, whether that’s paying more to invent new types of automation or compensating people enough that they choose to do it without the threat of starvation and homelessness (or finding new ways to exploit people into doing it, though I believe there’s a floor on that at which the other two options become more economically viable). That’s the whole pitch for having a labor market in the first place. At the same time, absent that political will, there’s no reason to expect any change in productivity to change the current arrangement. At best, the people working any jobs that get eliminated are discarded as obsolete, lose their ability to participate in the market, and are eventually handled by the criminal justice system or otherwise removed from consideration.
The article’s entire premise is Musk saying some random shit. Remember how, 13 years ago, Musk said he would land a man on Mars within 10 years?
This isn’t the only thing; the man has made so many promises that were lies or didn’t work out that it’s almost amazing people give him the benefit of the doubt. But people have to, or the economy might crash (which seems more and more inevitable now, as a fantasist-related crash can’t be avoided; and it’s worse: if you have seen Andreessen’s latest weird interview, it’s clear Trump and Musk are not the only mental voids with a lot of money, so they might all be like that).
The Blindsight vampires are here already.
@Soyweiser The irony is that if Musk were serious about landing a man on Mars by 2022, he had Falcon Heavy flying in 2017 and Crew Dragon flying with crew in 2020. The amount he’s spent on Starship would have covered several fully-expended FH launches to Mars transfer orbit and the development of a long-duration crew module. We know how to soft-land ~1-2 tons on Mars.
… What, you wanted him to bring the astronauts *back* afterwards? Are you some kind of Commie?
(But my point stands.)
supremely rational gamblers want to rewrite reality by threatening a journalist, because reporting got in the way of them getting money from polymarket. all while completely unaware that they’re giving him a better story than the actual missile impact thing https://www.timesofisrael.com/gamblers-trying-to-win-a-bet-on-polymarket-are-vowing-to-kill-me-if-i-dont-rewrite-an-iran-missile-story/ also https://awful.systems/post/7617781
update: polymarket claims to have banned users involved, not specifying how they found them https://xcancel.com/Polymarket/status/2033635318662860916#m
Polymarket when faced with the oracle problem: “What if we threaten to shoot the oracle?”
the rationalist counterpart to rubber hose cryptanalysis
e: damn it was right there: rubber hose ontology
“So I put an accumulator on Gaelic Warrior to win the Gold Cup, Arsenal to beat Leverkusen in the Champions League, and all-out nuclear conflict by the end of March”
new odium symposium episode. we examine the foundational TERF text, janice raymond’s “the transsexual empire,” which turns out to be about how trans people are a big pharma conspiracy
https://www.patreon.com/posts/12-invasion-of-w-152915964
www.odiumsymposium.com for links to other platforms
It always strikes me how stupid bigotry makes you. There are so many points where she comes right up against a point she cannot accept because it would go against her conclusion. Also lol @ the “not what we’re called”; that stuck with me for some reason
i hope you get hazard pay for all the psychic damage you inflict on each other
our pay is our satisfaction in having inflicted it on others as well
in all honesty we would love it if doing this were our job but there is no pathway to that that we can see. we just do it b/c it’s really fun
My understanding is that most professional podcasters start off more or less like this, start getting a Patreon or some light sponsors going in order to fund actually decent equipment, and then look at the numbers one morning and realize that actually they could just do this for a living.
Did I mention that one of your more recent eps covered some shit so odious I stress-ate a pile of Oreos? Keep up the good work

[image; alt text: “Yesterday I explained something so bleak to my therapist she asked me if we could pause for a minute so she could think about it. I’m getting close to winning therapy; I can feel it in my bones.”]
BTW, in markdown you can put alt text in the image link and renderers will put it into the image tag.
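For example (filename made up):

```markdown
![Me, winning therapy](images/therapy-meme.png)
```

which a renderer turns into `<img src="images/therapy-meme.png" alt="Me, winning therapy">`.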
Nifty, thanks!
“The Founder of Anthropic Says He Wants to Protect Humanity From AI. Just Don’t Ask How.” Another long article about the AI craze and in particular Anthropic. A snippet that stood out to me:
"Reviewing my interview transcripts one night, I discover I’d left my recorder running when I excused myself to use the bathroom at Anthropic. On the tape, Kyle Fish, the AI researcher, and Danielle Ghiglieri, my tattooed guide, are laughing about some visitors to their headquarters the day before, what sounds like a documentary or TV crew.
“I sit right next to Trenton,” Fish says. “I went back and told him, ‘Dude, you really did something to those guys with your sunscreen stuff yesterday.’ He thought it was hilarious.”
They’re both cracking up.
Ghiglieri says Fish, too, had convincingly come off as a “different species of human,” adding: “They were very enamored with you.”
They’re inclined to cooperate with whatever project these people proposed, she says, and make everybody a star. I hadn’t heard Trenton’s sunscreen spiel yet. Only later, over lunch, would he tell me that he stopped protecting himself against skin cancer because AI was going to end the world in five years.
Crazy to me how people can so confidently predict AI doomsday, and then just keep working at an AI company
I’m more concerned that the writer could listen to this, presumably multiple times on his tape, and still write the rest of the piece like these guys are acting in good faith. Regardless of the unanswerable question of whether they believe their own hype, they are clearly saying things for the purpose of self-enrichment and self-aggrandizement rather than out of any concern for other people, and that is where the story should be. Even the guys most ostensibly interested in protecting humanity are still, when they think the mic is off and the journalist is out of the room, joking about how they’re manipulating the press into saying what they want.
I think it’s a specific genre of reportage where you objectively[1] report what you observe and let the reader draw their own conclusions.
[1] problematic term, engage!
Reading the article again, that definitely feels like the angle the author was going for
I will confess that my initial reaction was from a partial reading, since I got derailed ranting about the Silicon Valley attitude towards neurodivergence and how much damage it’s doing to us. Basically right after that bit, it starts taking a much more (appropriately, imo) cynical tone that was honestly refreshing.
Let this be a lesson to those of us who must learn, I guess.
I mean there is a lot of crazy bullshit in there so I don’t blame anyone for getting derailed
NVidia’s announced an AI filter for PC gaming, calling it “AI-Powered Breakthrough In Visual Fidelity For Games” and hyping the ever-loving shit out of it.
The results are, unsurprisingly, complete garbage, and it’s already getting ripped apart by the gaming press.
I mean, some of their before/after images are much more impressive than the RE one, but the general look is less like a revolution in capacity and more like someone took some time to find the right Instagram filter.
Also, after taking a look at Starfield’s Steam page for comparison, I’m pretty sure that all the “before” images were taken on lower settings for the existing texture quality and lighting. Like, even in areas where the DLSS gives an improvement, the original game doesn’t look as bad as presented here.
Also the discourse has been ongoing since at least Skyrim’s original release whether or not the increasing fidelity of game graphics was actually making games better, or just more expensive to make and play. And that was before transformer models entered the picture and started cooking the world. I’m glad nVidia got some new jerk-off material, but even if it works exactly as advertised that’s all it is at this point.
I’m struck by how much contrast gets blasted into the shadows of every scene, reminiscent of the average RTX “remaster.” Lighting is treated not as a tool for composing scenes and guiding attention, but as a dial to be turned toward “more gooder” wherever possible. Just make everything look like everything else; that’s how you know the technology is getting Better.
increasing fidelity of game graphics was actually making games better, or just more expensive
I really liked what Control did with cranking up the verisimilitude and the photorealism, namely to accentuate the uncanniness and really up the new weird vibe.
Maybe it’s just me, but even the enhanced lighting aspect doesn’t look especially good, at least where faces are concerned; shining a hard light sideways so every facial nook and cranny gets highlighted in excruciating detail looks less natural and more like the old Android HDR photo filter, even before you realize it’s giving some characters Instagram makeovers.
I’m partway through the Gamers Nexus video clowning on the whole thing, and I kind of feel like I need to find the recording of the actual GDC presentation to pick it apart.
There’s a clip they use around the 17-minute mark where Jensen talks about how they combined structured data and generative AI. It’s just so wrong on so many levels that I feel like it deserves its own dedicated fucking sneer post. It’s some of the slimiest marketing wordplay leading into just blatantly false claims.
The fact that the slide with Palantir’s logo flanked by hearts didn’t result in even audible boos makes me very sad.
“5 Tools You Can Vibe Code For Your Business In Under An Hour” is exactly the sort of slop that someone with a hard-on for AI, no understanding of the risks of vibe coding core parts of your business’s infrastructure, and a guest-writing gig at Forbes would produce.
Starts with a sickening intro that leans into “pilled” to be “down with the kids”:
If you haven’t joined the Claudepilled crowd, open an account and play.
Bright ideas include “copy and paste the source code from your home page into Claude,” but overlook the part about how to actually get those changes deployed.
Wanna see my cool website? It’s at http://localhost:1234/
Take that, web developers!
Then she describes building a custom internal dashboard…
Open Claude Code and describe your business. List every software tool you use. Ask it to suggest the key metrics you’d want to see from each one. Go back and forth until the list feels right. Then give it your brand guidelines and ask it to build a dashboard that displays everything. Ask for it to be password protected.
Yes that sounds like a great idea and not a car crash waiting to happen
She also describes building a customer-facing onboarding site
Build a custom client-facing dashboard instead. Tell Claude Code what your onboarding process looks like step by step. Describe what information you need to collect and what your clients need to access. Ask it to build a secure portal they can log into, with automations that send them what they need and follow up to collect what you need. This is a branded, professional experience that scales without you. The emotional design matters here too: you want clients to feel held, not herded. Tell Claude that.
Yes, vibe-coded customer-facing tools are a fantastic idea and definitely not a vector for cyberattacks, nuh-uh. I’m sure it will be fine if you ask for it to be “secure”, right?
FML, are we in the Twilight Zone here?
a guest-writing gig at Forbes would produce.
I seriously think we can completely dismiss Forbes as a credible source at this point, even if it’s not something coming from, ahem, “contributors”
Ask for it to be password protected.
I think I’m having a stroke. Or at least I hope I’m having a stroke and that this unparodiably dumb piece isn’t any more real than it sounds.
The software industry is experiencing a huge collective AI psychosis.
New AI legal filing sanctions just dropped: https://storage.courtlistener.com/recap/gov.uscourts.ca6.152857/gov.uscourts.ca6.152857.50.2.pdf
I don’t have time to read over it completely yet, but here’s a taste:
That briefing repeatedly misrepresented the record, cited non-existent cases, and cited cases for propositions of law that they did not even discuss, much less support. As explained below, Irion’s and Egli’s misconduct warrants the sanctions laid out in Section II.C.
If we included typos and other errors that are arguably, but not clearly, a misrepresentation or fake citation, we would be looking at far more misstatements of fact and law
Irion and Egli did not respond to these directives. Instead, they said the show cause order was “void on its face for failing to include a signature of an Article III judge,” was “motivated by harassment of the Respondent attorneys,” and “reflect[ed] illegal ex-parte [sic] communications within this Court.”
Although citing fake cases violates Federal Rule of Appellate Procedure 38, Rule 38 alone is not “up to the task” of sanctioning this conduct, Chambers, 501 U.S. at 50, because Rule 38 allows only for the imposition of costs and attorneys’ fees, Sanctions § 33. But we think other sanctions are also appropriate, so we employ our inherent authority
Not a lawyer, just a bit of a law nerd, but this is a big deal, especially the fact that courts have been repeatedly using their inherent authority to sanction people who fuck this up. Courts do not routinely invoke their inherent authority like this. Also, this footnote is interesting:
Ghostwriting is when one person writes the document while another person takes credit for it without acknowledging the true author’s identity. See The American Heritage Dictionary of the English Language 741 (4th ed. 2000). Legal authorities generally discuss ghostwriting for a pro se litigant, see, e.g., Duran v. Carris, 238 F.3d 1268, 1272 (10th Cir. 2001), but we see no reason why rules regulating ghostwriting should apply in only the pro se context. The primary concern with ghostwriting is that the true author would escape liability for his conduct, see In re Mungo, 305 B.R. 762, 768 (Bankr. D.S.C. 2003); Ellis v. Maine, 448 F.2d 1325, 1328 (1st Cir. 1971), and that concern is just as acute when a lawyer ultimately signs the ghostwritten pleading.
It sounds like they’re looking for an angle to hold the LLM operators (OpenAI/Anthropic, or at least whatever company wraps the models in the necessary bits and bobs to make a product they can sell to stupid asshole lawyers) ultimately accountable for these filings, just as if they were a SovCit guru providing materials for one of their griftees to submit to the court without ever actually putting their name to the record where they might face consequences. I’d need to do some research to speculate on what that might mean, but it should give everyone operating in this space pause.
I’m still reading the appendix that goes into the specific hallucinations but it sounds like they’re pretty absurd based on the tone of this order.
• On pages 17 and 19, Whiting cites “T.C.A. § 29-12-119,” but we cannot find a section 29-12-119 in the Tennessee Code Annotated
lol. lmao.
On page 4, Whiting states “it is well settled that the First Amendment does not protect speech that knowingly asserts false statements of fact. United States v. Alverez, 567 U.S. 709, 721 (2012).” Alvarez states the opposite: “This opinion . . . rejects the notion that false speech should be in a general category that is presumptively unprotected.” Id. at 721–22 (plurality opinion).
Oh. Oh no.
• On page 1, Whiting states, “This Court has made clear that , [sic] ‘[T]he mere fact that a plaintiff did not prevail does not mean that the claim was frivolous.’ Adcock-Ladd v. Secretary of the Treasury, 227 F.3d 343, 350 (6th Cir. 2000).” Adcock-Ladd does not contain the quoted language, and it is not about frivolous cases.
This specific confabulation appears at least 5 times. I’m not sure if Whiting was copy/pasting from something ChatGPT spat out or if ChatGPT was at least consistently inventing the same bullshit.
Looking for a bit of context I found this local news piece and it certainly reads like the guy is a crank who kick-started this whole thing by trying to protest the crime of public safety during a global pandemic.
I’m pretty sure the 2 people cosplaying as lawyers are just as bugshit as he is.
edit: yeah, they’re SovCits
Finally, our orders are not invalid simply because the clerk signed them. We have already told Irion and Egli that our orders are not void when the clerk signs them in this very case. Whiting v. City of Athens, No. 24-5886, 2025 U.S. App. LEXIS 13507, at *1 (6th Cir. June 2, 2025). And the Supreme Court has twice denied petitions for mandamus from Irion and Egli demanding that the clerk stop signing our orders.
(italics in original, bold my emphasis)
God, I love when people think “because I said so” is an adequate backup for their BS.
“Judges love this one weird trick!”
Back and forth a few years ago on the SlateStarCodex subreddit, roughly:
Scott Alexander: Bay Area rationality is wonderful, we have foundations and group homes and jolly social activities and a Solstice ritual and even “Reciprocity and Propinquity: two different rationalist dating/matchmaking services”
Rando:
I don’t know, I live in a nice community in a different city where people I know have lots of Shabbat dinners, choirs, board game nights, discussions, etc. And zero people I know have joined a cult, and one person I know has developed psychosis, but she had a family history of psychosis, starting having symptoms in early adulthood, and pretty quickly went on antipsychotics and got a lot better.
Is it just that California attracts weird shit and if you put people in California, whatever they’re already doing will get culty?
Alexander: base rates! how do your demographics compare to ours?
Rando:
Probably similar size and age? Nearly everyone I know has parents who are teachers/lawyers/doctors/therapists/etc., so I guess upper middle class according to that book you wrote about a while ago.
It’s not like everyone’s doing great, lots of people have depression and anxiety and probably smoke more weed than is good for them. Most of those people already had those problems from their adolescence.
But our rates of weird problems, like multiple people with overlapping psychoses tied to some guy, are low.
Suppose you’re a college grad who has to decide between the usual unpaid internships at dumb startups vs. getting to be a ‘research fellow’ for a group that says it’s going to solve philosophy and save the world, and the only catch is that the group is actually a cult. Still seems pretty tempting honestly.
Oh?
a group that says it’s going to solve philosophy
This must hit really hard if you are Wittgenstein
My first degree was a professional degree, so after college I went out and got a paid job doing that, using the experience I had developed in paid summer jobs. Even when I was young I think I would have said no to Leverage Research.
I mean, handing out inflated titles and grandiose plans is part of the sales pitch. Y’know, for the cult.
Like, I think there’s a fundamental misunderstanding here. The problem isn’t that the people who want to be cult leaders are able to attract a lot of people who are preinclined to be cult followers and those people suffer the associated psychic damages. It’s that even the less culty parts of the rationalist subculture seem to produce a weirdly high number of wannabe cult leaders, even if they don’t conceptualize themselves that way.
AI seems good at purple prose and metaphors that don’t exactly make sense. No, I do not give a fuck about the “triangle of calm” when it comes to, of all things, the narrator taking off her shoes. No, I am not interested in how long the narrator sets the timer on the microwave when she makes literally the blandest meal of all time.
Now I’m sure the techbros truly think this is good “literary” writing. After all, they only care that the writing sounds flowery, because they seem to be very good at missing the actual meaning of everything. I remember Saltman saying that the movie Oppenheimer needed to be more optimistic to inspire more kids to become physicists (while also saying that The Social Network did that for startup founders).
All I could think about was: who has a microwave that beeps while it’s still cooking?
maybe it’s the carbon monoxide detector going off; that would make more sense
Mine does if I use the defrost setting. I assume it wants me to rearrange the contents, but when it beeps the contents are still one solid chunk of ice. It doesn’t make sense, especially for a device that claims to have a “smart” sensor.
It’s a bit like the excerpt. It feels like someone is trying to rewrite the American Psycho routine, but it hammers the obsessive-compulsive tropes with all the subtlety of a brick to the face while simultaneously lacking an overall purpose. It’s just noise.
I had the thought that maybe the author was intentionally trying to be mind-numbingly boring, but that just killed it. Into the slop jail!
I mean maybe it’s poorly worded and there’s only one set of beeps at the end. But then why would the protagonist be reminded multiple times?
Unless she’s remembering all the times in the past that microwaving bland chicken reminded her of the world being orderly?
But now I think I’m thinking too deeply about microwaves.
In other news, Cade Metz’ latest piece is actually pretty critical, especially by NYT standards, but you wouldn’t know it from the headline.
“A.I. Agents: They’re Fun. They’re Useful. But Don’t Give Them the Credit Card.”