• 2 Posts
  • 9 Comments
Joined 11 months ago
Cake day: October 2nd, 2023

  • If I wanted to sound like a rationalist I’d tell Scott to check his fallacies, specifically category error. It’s just such basic, wilful misconstrual on his part. Yeah, me liking my spaghetti quite salty doesn’t mean I want to add salt to the dessert!

    That’s all beside the original point, which was that a rigged system is one where the best do not rise to the top. So even if our socioeconomic system and… Starcraft streamers (lol) were categorically comparable — which it shouldn’t have to be said they aren’t — the OG point is precisely that so much talent goes underutilized and glory unrealized due to a lack of broad cultivation and opportunity.

    I don’t get what makes people this way, with such small souls, so painstakingly intent on being miserly. Same thing with JK Rowling: she has all the money in the world to pursue the wildest pleasures, or to leave everything and go off to some yurt on a spiritual search, and instead she purposefully acts in the most destructive and self-constricting manner. The same applies more generally to the awash-in-cash techbro and rationalist sets. You have the resources to do really interesting things, and yet you dedicate your time to making Juiceros.


  • Amazing quote he included from Tyler Cowen:

    If you are ever tempted to cancel somebody, ask yourself “do I cancel those who favor tougher price controls on pharma? After all, they may be inducing millions of premature deaths.” If you don’t cancel those people — and you shouldn’t — that should broaden your circle of tolerance more generally.

    Yes leftists, you not cancelling someone campaigning for lower drug prices is actually the same as endorsing mass murder and hence you should think twice before cancelling sex predators. It’s in fact called ephebophilia.

    What the globe emoji followed with is also a classic example of rationalists getting mesmerized by their verbiage:

    What I like about this framing is how it aims to recalibrate our sense of repugnance in light of “scope insensitivity,” a deeply rooted cognitive bias that occurs “when the valuation of a problem is not valued with a multiplicative relationship to its size.”


  • Where did you get that impression from? He says himself he is not advocating against aid per se, but that its effects should be judged more holistically, e.g. that organizations like GiveWell should also include the potential harms alongside benefits in their reports. The overarching message seems to be one of intellectual humility – to not lose sight that the ultimate aim is to help another human being who in the end is a person with agency just like you, not to feel good about yourself or to alleviate your own feelings of guilt.

    The basic conceit of projects like EA is the incredible high of self-importance and moral superiority one can get blinded by when one views themselves as more important than other people by virtue of helping so many of them. No one likes to be condescended to; sure, a life saved with whatever technical fix is better than a life lost, but human life is about so much more than bare material existence – dignity and freedom are crucial to a good life. The ultimate aim should be to shift agency and power into the hands of the powerless, not to bask in being the white knight trotting around the globe, saving the benighted from themselves.


  • the only future in that direction is one where they’re doing a much more painful version of the same job (programming against cookie cutter LLM code) for much, much less pay.

    To the extent that LLMs actually make programming more “productive”, isn’t the situation analogous to the way the power loom was bad for skilled handweavers whilst making textiles more affordable for everyone else?

    I should perhaps say that I’m saying this as someone who is just starting out as a web developer (really chose the right time for that, hah). For now I try to avoid LLMs, and even libraries that aren’t strictly necessary, because I like learning how everything works under the hood and want an intimate grasp of what I’m doing. But I can also see that ultimately that’s not what people pay you for, and that once you’ve built up sufficient skill to quickly parse LLM output, the demands of the market may make using them unavoidable.

    To be honest, I feel as conflicted and anxious about it all as the others who’ve already commented. Maybe I’m just too green to fully understand the value I would eventually bring, but can I really, in good conscience, say that a customer should pay me more when someone else can provide a similar product that’s “good enough” at a much lower price?

    Sorry for being another bummer. :(