Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Semi-obligatory thanks to @dgerard for starting this)
today in capitalism: landlords are using an AI tool to collude and keep rent artificially high
But according to the U.S. government’s case, YieldStar’s algorithm can drive landlords to collude in setting artificial rates based on competitively-sensitive information, such as signed leases, renewal offers, rental applications, and future occupancy.
One of the main developers of the software used by YieldStar told ProPublica that landlords had “too much empathy” compared to the algorithmic pricing software.
“The beauty of YieldStar is that it pushes you to go places that you wouldn’t have gone if you weren’t using it,” said a director at a U.S. property management company in a testimonial video on RealPage’s website that has since disappeared.
But they hashtag care!
I mean, yes. Obviously if all the data from these supposedly competing rental owners was being compiled by Some Guy this would be collusion, price gouging, etc.
But what if instead of Some Guy we used a computer? Eh? Eh? Pretty smart, yeah?
Where’s the guy who said AI use is a form of austerity
Yeah where is he? I would like to bludgeon him
Angrily puts away bludgeon
every popular scam eventually gets its Oprah moment, and now AI’s joining the same prestigious ranks as faith healing and A Million Little Pieces:
Microsoft co-founder Bill Gates, who stepped down as Microsoft CEO 24 years ago, will appear on the show to explore the “AI revolution coming in science, health, and education,” ABC says, and warn of “the once-in-a-century type of impact AI may have on the job market.”
and it’s got everything you love! veiled threats to your job if the AI “revolution” does or doesn’t get its way!
As a guest representing ChatGPT-maker OpenAI, Sam Altman will explain “how AI works in layman’s terms” and discuss “the immense personal responsibility that must be borne by the executives of AI companies.”
woe is Sam, nobody understands the incredible stress he’s under marketing the scam that’s making him rich as simultaneously incredibly dangerous but also absolutely essential
fuck I cannot wait for my mom to call me and regurgitate Sam’s words on “how AI works” and ask, panicked, if I’m fired or working for OpenAI or a cyborg yet
I’m truly surprised they didn’t cart Yud out for this shit
Yud would jump on a couch.
working for OpenAI
You probably are, if not by choice.
unironically part of why I am so fucking mad that reCaptcha ever became as big as it did. the various ways entities like cloudflare and google have forcefully inserted themselves into humanity’s daily lives, acting as rent-extracting bridgetroll with heavy “Or Else” clubs, incenses me to a degree that can leave me speechless
in this particular case, because reCaptcha is effectively outsourced dataset labelling, with the labeller (you, the end user, having to click through the stupid shit) not being paid. and they’ll charge high-count users for the privilege. it is so, so fucking insulting and abusive.
I always half-ass my captcha and try to pass in as many false answers as possible, because I’m a
rebelcunt.
I’m truly surprised they didn’t cart Yud out for this shit
Self-proclaimed sexual sadist Yud is probably a sex scandal time bomb and really not ready for prime time. Plus it’s not like he has anything of substance to add on top of Saltman’s alarmist bullshit, so it would just be reminding people how weird in a bad way people in this subculture tend to be.
that’s a very good point. now I’m wondering if not inviting Yud was a savvy move on Oprah’s part or if it was something Altman and the other money behind this TV special insisted on. given how crafted the guest list for this thing is, I’m leaning toward the latter
I think if you want to promote something you don’t invite the long-winded nerdy person. Don’t think a verbal blog post would do well on tv. I mean, I would also suck horribly if I was on tv, and would prob help make the subject I’m arguing for less popular.
Microsoft co-founder Bill Gates, who stepped down as Microsoft CEO 24 years ago, will appear on the show to explore the “AI revolution coming in science, health, and education,” ABC says, and warn of “the once-in-a-century type of impact AI may have on the job market.”
christ
billy g’s been going for years with bad takes on all three of those things (to the point that the gates foundation has actually been a problem, gatekeeping financing unless recipients acquiesce to using those funds the way the foundation wants them used (yeah, aid funds with instructions and limitations…)), but now there’s “AI” to assist with the issue
maybe the “revolution” can help by paying the people that are currently doing dataset curation for them a living wage? I’m sure that’s what billy g meant, right? right?
Look, where are you going to get your experts if you can’t trust Jeffrey Epstein’s Rolodex?
I mean, I have glasses and wear a wristwatch so by the standard definitions I qualify as a cyborg in all aspects but the aesthetic.
No wristwatch, but I have glasses and without electricity I stop breathing. (While asleep.)
So, yeah, cyborg.
I didn’t think she could top John of God, but here we are.
BTW 9th of September is not a Sunday lol
I wasn’t sure so I asked chatgpt. The results will shock you! Source
Image description
Image that looks like a normal chatgpt prompt.
Question: Is 9 september a sunday?
Answer: I’m terribly sorry to say this, but it turns out V0ldek is actually wrong. It is a sunday.
(I had no idea there were already sites that let you fake chatgpt conversations, btw; not that I’m shocked.)
This is barely on topic, but I’ve found a spambot in the wild. I know they’re a dime a dozen, but I wanted to take a deep dive.
https://www.reddit.com/user/ChiaPlotting/
It blew its load advertising a resume generator or some such bullshit across hundreds of subs. Here’s an example post. The account had a decent amount of karma, which stood out to me. I’m pretty old school, so I figured someone had just sold their account. Right? Wrong. All the posts are ChatGPT generated! Read in sequence, the karma-farm posts are very clearly AI generated, but individually they’re enticing enough to get a decent amount of engagement: “How I eliminated my debt with the snowball method”, “What do you guys think of recent Canadian immigration 🤨” (both paraphrased).
This guy isn’t anonymous, and he seemingly isn’t profiting off the script that he’s hawking. His reddit account leads to his github leads to his LinkedIn which mentions his recent graduation and his status as the co-founder of some blockchain bullshit. I have no interest in canceling or doxxing him, I just wanted to know what type of person would create this kind of junk.
The generator in question, which this man may have unknowingly destroyed his reddit account to advertise, is under the MIT license. It makes you wonder WHY he went to all this trouble.
I want to clone his repo and sniff around for data theft; the repo is 100% Python, so unless he owns any of the modules being imported, the chance of code obfuscation is low. But after seeing his LinkedIn I don’t think this guy’s trying to spread malware; I think he took a big, low-fiber shit aaaaalll over reddit as an earnest attempt at a resume builder.
Personally, I find that so much stranger than malice. 🤷‍♂️
the username makes me think the account started its life shilling for the chia cryptocurrency (the one that spiked storage prices for a while cause it relied on wearing out massive numbers of SSDs, before its own price fell so low people gave up on it), but I don’t know how to see an account’s oldest posts without going in through the defunct API
Maybe hot take, but when I see young people (recent graduation) doing questionable things in pursuit of attention and a career, I cut them some slack.
Like it’s hard for me to be critical for someone starting off making it in, um, gestures about this, world today. Besides, they’ll get the sense knocked into them through pain and tears soon enough.
I don’t find it strange or malicious; I see it as a symptom of why it was easier for us to find honest work then, and harder for them now.
I don’t know man, there are plenty of jobs that don’t involve any of whatever that is, like line cook or caregiver or going on disability.
Also he’s a programmer? You can find a Python job that isn’t, you know, this bullshit.
goddammit you got to it eight seconds before me
Read the original Yudkowsky. Please. FOR THE LOVE OF GOD.
Dunno what’s worse, that he’s thirstily comparing his shitty writing to someone famous, or that that someone is fucking Hayek.
Knowing who he follows the unclear point of Hayek was probably “is slavery ok actually”
I suspect that for every subject that Yud has bloviated about, one is better served by reading the original author that Yud is either paraphrasing badly (e.g., Jaynes) or lazily dismissing with third-hand hearsay (e.g., Bohr).
Even he thinks you shouldn’t read HPMOR.
Thinking back to when “the original Yudkowsky” needs a content warning for sexual assault.
I think HPMOR also still needs a content warning for talking about sexual assault. Weird how that is a pattern.
OK, so, Yud poured a lot of himself into writing HPMoR. It took time, he obviously believed he was doing something important — and he was writing autobiography, in big ways and small. This leads me to wonder: Has he said anything about Rowling, you know, turning out to be a garbage human?
A quick xcancel search (which is about all the effort I am willing to expend on this at the moment) found nothing relevant, but it did turn up this from Yud in 2018:
HPMOR’s detractors don’t understand that books can be good in different ways; let’s not mirror their mistake.
Yea verily, the book understander has logged on.
Another thing I turned up and that I need to post here so I can close that browser tab and expunge the stain from my being: Yud’s advice about awesome characters.
I find that fiction writing in general is easier for me when the characters I’m working with are awesome.
The important thing for any writer is to never challenge oneself. The Path of Least Resistance™!
The most important lesson I learned from reading Shinji and Warhammer 40K
What is the superlative of “read a second book”?
Awesome characters are just more fun to write about, more fun to read, and you’re rarely at a loss to figure out how they can react in a story-suitable way to any situation you throw at them.
“My imagination has not yet descended.”
Let’s say the cognitive skill you intend to convey to your readers (you’re going to put the readers through vicarious experiences that make them stronger, right? no? why are you bothering to write?)
In college, I wrote a sonnet to a young woman in the afternoon and joined her in a threesome that night.
You’ve set yourself up to start with a weaksauce non-awesome character. Your premise requires that she be weak, and break down and cry.
“Can’t I show her developing into someone who isn’t weak?” No, because I stopped reading on the first page. You haven’t given me anyone I want to sympathize with, and unless I have some special reason to trust you, I don’t know she’s going to be awesome later.
Holding fast through the pain induced by the rank superficiality, we might just find a lesson here. Many fans of Harry Potter have had to cope, in their own personal ways, with the stories aging badly or becoming difficult to enjoy. But nothing that Rowling does can perturb Yudkowsky, because he held the stories in contempt all along.
This holiday season, treat your loved ones to the complete printed set* of the original Yudkowsky for the low introductory price of $1,299.99. And if you act now, you’ll also get 50% off your subscription to the exciting new upcoming Yudkowsky, only $149 per quarter!
*This fantastic deal made possible by our friends at Amazon Print-on-Demand. Don’t worry, they’re completely separate from the thoughtless civilization-killers in the AWS and AI departments whom we have taught you to fear and loathe
(how far are we from this actually happening?)
This reminded me, tangentially, of how there used to be two bookstores in Cambridge, MA that both offered in-house print-on-demand. But apparently the machines were hard to maintain, and when the manufacturer went out of business, there was no way to keep them going. I’d used them for some projects, like making my own copies of my PhD thesis. For my most recent effort, a lightly revised edition of Calculus Made Easy, I just went with Lulu.
I remember those machines (in general)!
yuh it’s basically the stuff Kindle Print or Lulu or Ingram use. (Dunno if they still do, but in the UK Amazon just used Ingram.)
Cheap hack: put your book on Amazon at a swingeing price, order one (1) author copy at cost
NASB is there an xcancel but for medium dot com?
archive.today usually works
@fasterandworse Scribe.rip also works
brilliant! thanks!
saw you already got two answers, another answer: medium’s stupid popover blocker is based on a counter value in a cookie that you can blow up yourself (or get around with instance windows)
I am a very big fan of the Fx Temporary Containers extension
I didn’t even know about the temporary containers extension. that’ll be very useful for so much stuff. Thanks as well!
yeah for some reason it’s not very well known, which is why I tell people about it. I’m 90% done with my months-ago-promised browser post, and should have it up soon
couple last-minute irks came up recently as I was doing some stuff, so now I’m trying to figure out whether those have answers or not…
that didn’t take long https://blog.kagi.com/announcing-assistant
my favourite thing about kagi is how when you click on the kagi logo on the kagi.com home page you get a 404
nice
I knew Kagi was kinda screwed the moment the CEO went off like Castle Bravo, but jeez
can be activated by appending ? to the end of your searches
what a wonderfully clever interface that absolutely won’t go wrong in any number of situations at least 5~10 of which I cannot think of right now
siiiiiiiiiigh
oh hey, we’re back to “deepmind models dreamed up some totally novel structures!”, but proteins this time! news!
do we want to start a betting pool for how long it’ll take 'em to walk this back too?
it’s weird how they’re pumping this specific bullshit out now that a common talking point is “well you can’t say you hate AI, because the non-generative bits do actually useful things like protein folding”, as if any of us were the ones who chose to market this shit as AI, and also as if previous AI booms weren’t absolutely fucking turgid with grifts too
given the semi-known depth of google-lawyer-layering, I suspect this presser got put together a few weeks prior
not that I’m gonna miss an opportunity to enjoy it landing when it does, mind you
I suspect it’s a bit of a tell that upcoming hype cycles will be focused on biotech. Not that any of these people writing checks have any more of a clue about biotech than they do about computers.
sounds to me a bit like crypto gaming, as in techbros trying to insert themselves as middlemen in a place that already has money, because they realized that they can’t turn profit on their own
That was the hype cycle before crypto - you’ll see companies that pivoted from biotech to crypto to AI.
Haven’t read the whole thing but I do chuckle at this part from the synopsis of the white paper:
[…] Our results suggest that AlphaProteo can generate binders “ready-to-use” for many research applications using only one round of medium-throughput screening and no further optimization.
And a corresponding anti-sneer from Yud (xcancel.com):
@ESYudkowsky: DeepMind just published AlphaProteo for de novo design of binding proteins. As a reminder, I called this in 2004. And fools said, and still said quite recently, that DM’s reported oneshot designs would be impossible even to a superintelligence without many testing iterations.
Now medium-throughput is not a commonly defined term, but it’s what DeepMind seems to call 96-well testing, which wikipedia just calls the smallest size of high-throughput screening—but I guess that sounds less impressive in a synopsis.
Which as I understand it basically boils down to “Hundreds of tests! But Once!”.
Does 100 count as one or many iterations?
Also was all of this not guided by the researchers and not from-first-principles-analyzing-only-3-frames-of-the-video-of-a-falling-apple-and-deducing-the-whole-of-physics path so espoused by Yud?
Also does the paper not claim success for 7 proteins and failure for 1, making it maybe a tad early for claiming I-told-you-so?
Also real-life-complexity-of-myriads-and-myriads-of-protein-and-unforeseen-interactions?
As a reminder, I called this in 2004.
that sound you hear is me pressing X to doubt
Yud in the replies:
The essence of valid futurism is to only make easy calls, not hard ones. It ends up sounding prescient because most can’t make the easy calls either.
“I am so Alpha that the rest of you do not even qualify as Epsilon-Minus Semi-Morons”
i suspect - i don’t know, but suspect - that it’s really leveraging all known protein structures ingested by google and cribbing bits from what is known, like alphafold does to a degree. i’m not sure how similar these proteins are to something else, or if known interacting proteins have been sequenced and/or have had their xrds taken, or if there are many antibodies with known sequences that alphaproteo can crib from, but some of these target proteins have these. an actual biologist would have to weigh in. i understand that they make up to 96 candidate proteins, then they test them, but most of the time fewer, sometimes down to a few, which suggests there are some constraints. (yes this counts as one iteration, they’re just taking low tens to 96 shots at it.) is google running out of compute? also, they’re using real-life xrd structures of target proteins, which means that 1. they’re not using alphafold to get these initial target structures, and 2. this is a mildly serious limitation for any new target. and yeah, if you’re wondering, there are antibodies against that one failed target, more than one, and not only as research tools but as approved pharmaceuticals
i’m tired boss
wait that’s just antibodies with extra steps
living things literally are just fuzzing it until something sticks and it works
but but proteins! surely they’ve got it right this time! /s
(I wondered what you’d say when I saw this. I can only imagine how exhausting)
i’m not done with the last one, i’ve already collected some footnotes but not enough to my liking
You think wood glue in your pizza sauce is great? Try prions!
Have you ever thought to yourself “I wish I could read Yud’s Logorrhea but in the form of a boring yet pretentious cartoon, like a Rationalist Cinematic Universe!”?
Well boy oh boy do I have a link for you:
https://xcancel.com/ESYudkowsky/status/1832452673867546802
https://www.youtube.com/watch?v=fVN_5xsMDdg
TBH I thought the whole star blinking plot point was kind of neat when I was a teenager, but thought the story got a bit muddled by the end. Of course at the time I was trying to read it as a sci-fi story and not a P(doom) propaganda piece. My mistake.
I didn’t want to watch the cartoon because I thought I could just skim the story faster, and that’s how I read the “Hurr durr AI can derive general relativity in three frames, nothing personnel, kid” story in full for the first time. It sucks that people didn’t nip Yud in the bud early enough by telling him he lacked sci-fi chops, though I suspect that wouldn’t have slowed him down at all.
The story itself is an allegory[1] about AI processing information fast[2]. Yud wasn’t thinking of himself as a sci-fi writer when writing this; he probably thought he was the messiah delivering a sermon, which… is exactly how I’ve come to understand Yud anyway.

[1] the fact of which is explicitly spelt out in the middle third, when it drops out of the narrative entirely to do so
[2] see the hurr durr description in the paragraph above
thoughts, in order:
- wow that’s an annoying narrator voice (but I guess you have to stay on brand?)
- “oh god, this shit again. but maybe I won’t have to endure much of it?”
- (timecode: 00:21) checks video runbar “nope.”, tab closed
so no surprise for this crowd, but remember all those reply guys who said Copilot+ would never be an issue cause it’d only work with the magical ARM chips with onboard AI accelerators in Copilot+ PCs? well the fucking obvious has happened
“we couldn’t excite enough people to buy yet another windows arm machine that near-certainly won’t be market-ready for 3 years after its launch, so now we’re going to force this shit on everyone”
“wave 2” aka “sure, rebrand will totally fix it”
this shit’s starting to make me feel claustrophobic
come to Linux! we’ve got:
- pain
- the ability to create a fully custom working environment designed to your own specifications, which then gets pulled out from under you when the open source projects that you built your environment on get taken over by fucking fascists
- about 3 and a half months til Red Hat and IBM decide they’re safe to use their position to insinuate an uwu smol bean homegrown open source LLM model into your distro’s userland. it’s just openwashed Copilot+ and no you can’t disable it
- maybe AmigaOS on 68k was enough, what have we gained since then?
I’m actually still working on a project kinda related to this, but am currently in a serious “is this embarrassingly stupid?” stage because I’m designing something without enough technical knowledge to know what is possible but trying to keep focused on the purpose and desired outcome.
I can lend some systems expertise from my own tinkering if you need it! a lot of my designs never got out of the embarrassingly stupid stage (what if my init system was a Prolog runtime? what if it too was emacs?) but it’s all worth exploring
I ask you this hoping it isn’t insulting, but how are you with os kernel level stuff?
it’s not insulting at all! I’m not a Linux kernel dev by any means, but I have what I consider a fair amount of knowledge in the general area — OS design and a selection of algorithm implementations from the Linux kernel were part of what I studied for my degree, and I’ve previously written assembly boot and both C and Rust OS kernel code for x86, ARM, and MIPS. most of my real expertise is in the deeper parts of userland, but I might be able to give you a push in the right direction for anything internal to the kernel.
great! I’ll show you something soon hopefully and see what you think
what if my init system was a Prolog runtime?
Not only can you describe the desired system state and have your init figure out dependencies, you can list just the dependencies and have your init set up all possible system states until you find one to your liking!
what if it too was emacs?
Emacs as pid 1 is a classic of the genre, but a prolog too? Wouldn’t a Kanren make more sense or is elisp not good for that?
Sounds like the real horseshoe theory is that nerds of all kinds of heterodox political stripes will eventually reinvent/discover Lisp and get freaky with it. A common thread connecting at least RMS, PG, Eich, Moldbug, suzuran, jart, Aphyr, self and me.
Not only can you describe the desired system state and have your init figure out dependencies, you can list just the dependencies and have your init set up all possible system states until you find one to your liking!
exactly! the way I imagined it, service definitions would be purely declarative Prolog, mutable system state would be asserts on the Prolog in-memory factbase (and flexible definitions could be written to tie system state sources like sysfs descriptors to asserts), and service manager commands would just be a special case of the system state assert system. I’m still tempted to do this, but I feel like ordinary developers have a weird aversion to Prolog that’d doom the thing.
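for anyone curious, the declarative part doesn’t even need Prolog to toy with. here’s a minimal Python sketch of the same idea (service names and all other details hypothetical), with stdlib graphlib standing in for the resolver and a plain set standing in for the assert-based factbase:

```python
from graphlib import TopologicalSorter

# Declarative "facts": each service maps to the services it depends on.
# (Hypothetical example services, purely for illustration.)
SERVICES = {
    "network": [],
    "syslog": [],
    "sshd": ["network", "syslog"],
    "webapp": ["network"],
}

def needed(target, services):
    """Transitive dependency closure of `target`, including itself."""
    seen, stack = set(), [target]
    while stack:
        svc = stack.pop()
        if svc not in seen:
            seen.add(svc)
            stack.extend(services[svc])
    return seen

def start(target, services, facts):
    """Start `target` plus whatever it needs, dependencies first.

    `facts` is the mutable "factbase": services already asserted as
    running are skipped, and newly started ones are asserted into it.
    Returns the list of services started, in order.
    """
    wanted = needed(target, services)
    order = [s for s in TopologicalSorter(services).static_order()
             if s in wanted and s not in facts]
    facts.update(order)
    return order
```

so starting "sshd" on an empty factbase brings up network and syslog first, and asking for a second service afterwards only starts what isn’t already asserted as running. the appeal of doing it in actual Prolog is that the closure and the ordering fall out of the resolution strategy instead of being hand-written like this.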
Emacs as pid 1 is a classic of the genre, but a prolog too? Wouldn’t a Kanren make more sense or is elisp not good for that?
this idea was usually separate from the Prolog init system, but it took a few forms — a cut-down emacs with a Lisp RPC connection to a session emacs (namely the one I use to manage my UI and as a window manager) (also, I made a lot of progress in using emacs as a weird but functional standalone app runtime) and elisp configuration, a declarative version of that implemented as an elisp miniKanren, and a few other weird iterations on the same theme.
Sounds like the real horseshoe theory is that nerds of all kinds of heterodox political stripes will eventually reinvent/discover Lisp and get freaky with it.
the common thread might boil down to an obsession with lambda calculus, I think
Fellas, my in-laws gave me a roomba and it’s so cute I put googly eyes on it. I’m e/acc now
On bsky you are required to post proof of cat, here at e/acc you are required to post proof of googly roomba
Take a look w/ your own googly eyes
Even better than I had thought, I expected smaller eyes. Thanks, it is glorious. That smile.
please be very careful with the VSLAM (camera+sensors) ones, and note carefully that iRobot avoided responsibility for this by claiming the impacted people were testers (a claim the alleged testers appear to disagree with)
thanks for the tip! 🙏
e/vac
Oh yay my corporate job I’ve been at for close to a decade just decided that all employees need to be “verified” by an AI startup’s phone app for reasons: https://www.veriff.com/ Ugh I’d rather have random drug tests.
I don’t see the point of this app/service. Why can’t someone who is trusted at the company (like HR) just check ID manually? I understand it might be tough if everyone is fully remote but don’t public notaries offer this kind of service?
Notaries? Pah! They’re not even web scale. Now AI, now that’s web scale.
we have worldcoin at home
Our combination of AI and in-house human verification teams ensures bad actors are kept at bay and genuine users experience minimal friction in their customer journey.
what’s the point, then?
One or more of the following:
- they don’t bother with ai at all, but pretending they do helps with sales and marketing to the gullible
- they have ai but it is totally shit, and they have to mechanical turk everything to have a functioning system at all
- they have shit ai, but they’re trying to make it better and the humans are there to generate test and training data annotations
Am I understanding this right: this app takes a picture of your ID card or passport and then feeds it to some ML algorithm to figure out whether the document is real, plus some additional stuff like address verification?
Depending on where you’re located, you might try and file a GDPR complaint against this. I’m not a lawyer but I work with the DSO for our company and routinely piss off people by raising concerns about whatever stupid tool marketing or BI tried to implement without asking anyone, and I think unless you work somewhere that falls under one of the exceptions for GDPR art. 5 §1 you have a pretty good case there because that request seems definitely excessive and not strictly necessary.
They advertise a stunning 95% success rate! Since it has a 9 and a 5 in the number it’s probably as good as five nines. No word on what the success rate is for transgender people or other minorities though.
As for the algorithm: they advertise “AI” and “reinforced learning”, but that could mean anything from good old fashioned Computer Vision with some ML dust sprinkled on top, to feeding a diffusion model a pair of images and asking it if they’re the same person. The company has been around since before the Chat-GPT hype wave.
Given that my wife interviewed with a “digital AI assistant” company for the position of, effectively, the digital AI assistant, well before the current bubble really took off, I would not be at all surprised if they kept a few wage-earners on staff to handle the more inconclusive checks.
James Stephanie Sterling released a video tearing into the Doom generative AI we covered in the last stubsack. there’s nothing too surprising in there for awful.systems regulars, but it’s a very good summary of why the thing is awful that doesn’t get too far into the technical deep end.
steph also spends 20 minutes calling everyone involved a c*nt, which i mean fair
Skeleton warriors!
steph also spends 20 minutes calling everyone involved a c*nt
I mean, that’s every single episode, really
Alex Jones interviews ChatGPT: https://xcancel.com/RealAlexJones/status/1830063258088181941
I know what I’m listening to this evening. Knowledge Fight Link
those guys work fast!
I usually can’t stand radio-style podcasters but these guys are just too good. The way they play off each other is top notch.
oh man. I knew the guy was an idiot but hooooo damn he dumb
also the most of his content I’ve ever taken in, and that was some of the hardest I’ve laughed in a good while
Oh God, some days later he’s doing it again