Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Semi-obligatory thanks to @dgerard for starting this)

  • self@awful.systems
    15 days ago

    today in capitalism: landlords are using an AI tool to collude and keep rent artificially high

    But according to the U.S. government’s case, YieldStar’s algorithm can drive landlords to collude in setting artificial rates based on competitively-sensitive information, such as signed leases, renewal offers, rental applications, and future occupancy.

    One of the main developers of YieldStar told ProPublica that landlords had “too much empathy” compared to the algorithmic pricing software.

    “The beauty of YieldStar is that it pushes you to go places that you wouldn’t have gone if you weren’t using it,” said a director at a U.S. property management company in a testimonial video on RealPage’s website that has since disappeared.

  • self@awful.systems
    16 days ago

    every popular scam eventually gets its Oprah moment, and now AI’s joining the same prestigious ranks as faith healing and A Million Little Pieces:

    Microsoft co-founder Bill Gates, who stepped down as Microsoft CEO 24 years ago, will appear on the show to explore the “AI revolution coming in science, health, and education,” ABC says, and warn of “the once-in-a-century type of impact AI may have on the job market.”

    and it’s got everything you love! veiled threats to your job if the AI “revolution” does or doesn’t get its way!

    As a guest representing ChatGPT-maker OpenAI, Sam Altman will explain “how AI works in layman’s terms” and discuss “the immense personal responsibility that must be borne by the executives of AI companies.”

    woe is Sam, nobody understands the incredible stress he’s under marketing the scam that’s making him rich as simultaneously incredibly dangerous but also absolutely essential

    fuck I cannot wait for my mom to call me and regurgitate Sam’s words on “how AI works” and ask, panicked, if I’m fired or working for OpenAI or a cyborg yet

    I’m truly surprised they didn’t cart Yud out for this shit

      • froztbyte@awful.systems
        15 days ago

        unironically part of why I am so fucking mad that reCaptcha ever became as big as it did. the various ways entities like cloudflare and google have forcefully inserted themselves into humanity’s daily lives, acting as rent-extracting bridgetroll with heavy “Or Else” clubs, incenses me to a degree that can leave me speechless

        in this particular case, because reCaptcha is effectively outsourced dataset labelling, with the labeller (you, the end user, having to click through the stupid shit) not being paid. and they’ll charge high-count users for the privilege. it is so, so fucking insulting and abusive.

        • V0ldek@awful.systems
          14 days ago

          I always half-ass my captcha and try to pass in as many false answers as possible, because I’m a rebel cunt.

    • Architeuthis@awful.systems
      15 days ago

      I’m truly surprised they didn’t cart Yud out for this shit

      Self-proclaimed sexual sadist Yud is probably a sex scandal time bomb and really not ready for prime time. Plus it’s not like he has anything of substance to add on top of Saltman’s alarmist bullshit, so it would just be reminding people how weird in a bad way people in this subculture tend to be.

      • self@awful.systems
        15 days ago

        that’s a very good point. now I’m wondering if not inviting Yud was a savvy move on Oprah’s part or if it was something Altman and the other money behind this TV special insisted on. given how crafted the guest list for this thing is, I’m leaning toward the latter

        • Soyweiser@awful.systems
          15 days ago

          I think if you want to promote something you don’t invite the longwinded nerdy person. Don’t think a verbal blog post would do well on tv. I mean, I would also suck horribly if I was on tv, and would prob help make the subject I’m arguing for less popular.

    • froztbyte@awful.systems
      15 days ago

      Microsoft co-founder Bill Gates, who stepped down as Microsoft CEO 24 years ago, will appear on the show to explore the “AI revolution coming in science, health, and education,” ABC says, and warn of “the once-in-a-century type of impact AI may have on the job market.”

      christ

      billy g’s been going for years with bad takes on those three things (to the point that the gates foundation have actually been a problem, gatekeeping financing unless recipients acquiesce to using those funds the way the foundation wants them used (yeah, aid funds with instructions and limitations…)), but now there can be “AI” to assist with the issue

      maybe the “revolution” can help by paying the people that are currently doing dataset curation for them a living wage? I’m sure that’s what billy g meant, right? right?

      • -dsr-@awful.systems
        15 days ago

        No wristwatch, but I have glasses and without electricity I stop breathing. (While asleep.)

        So, yeah, cyborg.

    • Soyweiser@awful.systems
      13 days ago

      I wasn’t sure so I asked chatgpt. The results will shock you! Source

      Image description

      Image that looks like a normal chatgpt prompt.

      Question: Is 9 september a sunday?

      Answer: I’m terribly sorry to say this, but it turns out V0ldek is actually wrong. It is a sunday.

      • Soyweiser@awful.systems
        13 days ago

        (I had no idea there were sites which allowed you to fake chatgpt conversations already btw, not that I’m shocked).

  • slopjockey@awful.systems
    16 days ago

    This is barely on topic, but I’ve found a spambot in the wild. I know they’re a dime a dozen, but I wanted to take a deep dive.

    https://www.reddit.com/user/ChiaPlotting/

    It blew its load advertising a resume generator or some bullshit across hundreds of subs. Here’s an example post. The account had a decent amount of karma, which stood out to me. I’m pretty old school, so I thought someone had just sold their account. Right? Wrong. All the posts are ChatGPT generated! Read in sequence, all the karma farm posts are very clearly AI generated, but individually they’re enticing enough that they get a decent amount of engagement: “How I eliminated my debt with the snowball method”, “What do you guys think of recent Canadian immigration 🤨” (both paraphrased).

    This guy isn’t anonymous, and he seemingly isn’t profiting off the script he’s hawking. His reddit account leads to his github, which leads to his LinkedIn, which mentions his recent graduation and his status as co-founder of some blockchain bullshit. I have no interest in canceling or doxxing him; I just wanted to know what type of person would create this kind of junk.

    The generator in question, which this man may have unknowingly destroyed his reddit account to advertise, is under the MIT license. It makes you wonder WHY he went to all this trouble.

    I want to clone his repo and sniff around for data theft; the repo is 100% Python, so unless he owns any of the modules being imported, the chance of code obfuscation is low. But after seeing his LinkedIn I don’t think this guy’s trying to spread malware; I think he took a big, low fiber shit aaaaalll over reddit as an earnest attempt at a resume builder.
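    (If anyone else wants to do that kind of sniffing, here’s a rough sketch using only the standard-library ast module; the repo path and the “worth a second look” module list are placeholders for illustration, not anything pulled from his actual repo.)

```python
# rough sketch: walk a cloned repo and list every module its .py files import,
# so you can eyeball anything that smells like exfiltration. the path and the
# "sketchy" set below are made-up placeholders, not from the repo in question.
import ast
from pathlib import Path

SKETCHY = {"requests", "smtplib", "socket", "urllib"}  # modules worth a second look

def imports_in(path: Path) -> set[str]:
    tree = ast.parse(path.read_text(encoding="utf-8"), filename=str(path))
    found: set[str] = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            found.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            found.add(node.module.split(".")[0])
    return found

all_imports: set[str] = set()
for py_file in Path("cloned-repo").rglob("*.py"):  # placeholder path
    try:
        all_imports |= imports_in(py_file)
    except SyntaxError:
        print(f"couldn't parse {py_file}")

print("imports found:", sorted(all_imports))
print("worth a closer look:", sorted(all_imports & SKETCHY))
```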

    Personally, I find that so much stranger than malice. 🤷‍♂️

    • self@awful.systems
      16 days ago

      the username makes me think the account started its life shilling for the chia cryptocurrency (the one that spiked storage prices for a while cause it relied on wearing out massive numbers of SSDs, before its own price fell so low people gave up on it), but I don’t know how to see an account’s oldest posts without going in through the defunct API

    • imadabouzu@awful.systems
      16 days ago

      Maybe hot take, but when I see young people (recent graduation) doing questionable things in pursuit of attention and a career, I cut them some slack.

      Like it’s hard for me to be critical of someone starting off, trying to make it in, um, gestures at this, world today. Besides, they’ll get the sense knocked into them through pain and tears soon enough.

      I don’t find it strange or malicious; I see it as a symptom of why it was easier for us to find honest work then, and harder for them now.

      • Amoeba_Girl@awful.systems
        16 days ago

        I don’t know man, there are plenty of jobs that don’t involve any of whatever that is, like line cook or caregiver or going on disability.

        • V0ldek@awful.systems
          14 days ago

          Also he’s a programmer? You can find a Python job that isn’t, you know, this bullshit.

    • gerikson@awful.systems
      17 days ago

      Dunno what’s worse, that he’s thirstily comparing his shitty writing to someone famous, or that that someone is fucking Hayek.

      Knowing who he follows, the unclear point of Hayek was probably “is slavery ok actually”

      • blakestacey@awful.systems
        17 days ago

        I suspect that for every subject that Yud has bloviated about, one is better served by reading the original author that Yud is either paraphrasing badly (e.g., Jaynes) or lazily dismissing with third-hand hearsay (e.g., Bohr).

        • Soyweiser@awful.systems
          16 days ago

          I think HPMOR also still needs a content warning for talking about sexual assault. Weird how that is a pattern.

          • blakestacey@awful.systems
            14 days ago

            A quick xcancel search (which is about all the effort I am willing to expend on this at the moment) found nothing relevant, but it did turn up this from Yud in 2018:

            HPMOR’s detractors don’t understand that books can be good in different ways; let’s not mirror their mistake.

            Yea verily, the book understander has logged on.

            • blakestacey@awful.systems
              12 days ago

              Another thing I turned up and that I need to post here so I can close that browser tab and expunge the stain from my being: Yud’s advice about awesome characters.

              I find that fiction writing in general is easier for me when the characters I’m working with are awesome.

              The important thing for any writer is to never challenge oneself. The Path of Least Resistance™!

              The most important lesson I learned from reading Shinji and Warhammer 40K

              What is the superlative of “read a second book”?

              Awesome characters are just more fun to write about, more fun to read, and you’re rarely at a loss to figure out how they can react in a story-suitable way to any situation you throw at them.

              “My imagination has not yet descended.”

              Let’s say the cognitive skill you intend to convey to your readers (you’re going to put the readers through vicarious experiences that make them stronger, right? no? why are you bothering to write?)

              In college, I wrote a sonnet to a young woman in the afternoon and joined her in a threesome that night.

              You’ve set yourself up to start with a weaksauce non-awesome character. Your premise requires that she be weak, and break down and cry.

              “Can’t I show her developing into someone who isn’t weak?” No, because I stopped reading on the first page. You haven’t given me anyone I want to sympathize with, and unless I have some special reason to trust you, I don’t know she’s going to be awesome later.

              Holding fast through the pain induced by the rank superficiality, we might just find a lesson here. Many fans of Harry Potter have had to cope, in their own personal ways, with the stories aging badly or becoming difficult to enjoy. But nothing that Rowling does can perturb Yudkowsky, because he held the stories in contempt all along.

    • istewart@awful.systems
      17 days ago

      This holiday season, treat your loved ones to the complete printed set* of the original Yudkowsky for the low introductory price of $1,299.99. And if you act now, you’ll also get 50% off your subscription to the exciting new upcoming Yudkowsky, only $149 per quarter!

      *This fantastic deal made possible by our friends at Amazon Print-on-Demand. Don’t worry, they’re completely separate from the thoughtless civilization-killers in the AWS and AI departments whom we have taught you to fear and loathe

      (how far are we from this actually happening?)

      • blakestacey@awful.systems
        17 days ago

        This reminded me, tangentially, of how there used to be two bookstores in Cambridge, MA that both offered in-house print-on-demand. But apparently the machines were hard to maintain, and when the manufacturer went out of business, there was no way to keep them going. I’d used them for some projects, like making my own copies of my PhD thesis. For my most recent effort, a lightly revised edition of Calculus Made Easy, I just went with Lulu.

        • David Gerard@awful.systems
          17 days ago

          yuh it’s basically the stuff Kindle Print or Lulu or Ingram use. (Dunno if they still do, but in the UK Amazon just used Ingram.)

          Cheap hack: put your book on Amazon at a swingeing price, order one (1) author copy at cost

    • froztbyte@awful.systems
      10 days ago

      saw you already got two answers, another answer: medium’s stupid popover blocker is based on a counter value in a cookie that you can blow up yourself (or get around with instance windows)

      I am a very big fan of the Fx Temporary Containers extension

      • Steve@awful.systems
        10 days ago

        I didn’t even know about the temporary containers extension. that’ll be very useful for so much stuff. Thanks as well!

        • froztbyte@awful.systems
          10 days ago

          yeah for some reason it’s not very well known, which is why I tell people about it. I’m 90% done with my months-ago-promised browser post, and should have it up soon

          couple last-minute irks came up recently as I was doing some stuff, so now I’m trying to figure out whether those have answers or not…

  • froztbyte@awful.systems
    14 days ago

    oh hey, we’re back to “deepmind models dreamed up some totally novel structures!”, but proteins this time! news!

    do we want to start a betting pool for how long it’ll take 'em to walk this back too?

    • self@awful.systems
      14 days ago

      it’s weird how they’re pumping this specific bullshit out now that a common talking point is “well you can’t say you hate AI, because the non-generative bits do actually useful things like protein folding”, as if any of us were the ones who chose to market this shit as AI, and also as if previous AI booms weren’t absolutely fucking turgid with grifts too

      • froztbyte@awful.systems
        14 days ago

        given the semi-known depth of google-lawyer-layering, I suspect this presser got put together a few weeks prior

        not that I’m gonna miss an opportunity to enjoy it landing when it does, mind you

      • istewart@awful.systems
        14 days ago

        I suspect it’s a bit of a tell that upcoming hype cycles will be focused on biotech. Not that any of these people writing checks have any more of a clue about biotech than they do about computers.

        • skillissuer@discuss.tchncs.de
          14 days ago

          sounds to me a bit like crypto gaming, as in techbros trying to insert themselves as middlemen in a place that already has money, because they realized that they can’t turn profit on their own

    • zogwarg@awful.systems
      14 days ago

      Haven’t read the whole thing but I do chuckle at this part from the synopsis of the white paper:

      […] Our results suggest that AlphaProteo can generate binders “ready-to-use” for many research applications using only one round of medium-throughput screening and no further optimization.

      And a corresponding anti-sneer from Yud (xcancel.com):

      @ESYudkowsky: DeepMind just published AlphaProteo for de novo design of binding proteins. As a reminder, I called this in 2004. And fools said, and still said quite recently, that DM’s reported oneshot designs would be impossible even to a superintelligence without many testing iterations.

      Now medium-throughput is not a commonly defined term, but it’s what DeepMind seems to call 96-well testing, which wikipedia just calls the smallest size of high-throughput screening—but I guess that sounds less impressive in a synopsis.

      Which as I understand it basically boils down to “Hundreds of tests! But Once!”.
      Does 100 count as one or many iterations?
      Also was all of this not guided by the researchers, and not the from-first-principles-analyzing-only-3-frames-of-the-video-of-a-falling-apple-and-deducing-the-whole-of-physics path so espoused by Yud?
      Also does the paper not claim success for 7 proteins and failure for 1, making it maybe a tad early for claiming I-told-you-so?
      Also real-life-complexity-of-myriads-and-myriads-of-protein-and-unforeseen-interactions?

      • blakestacey@awful.systems
        13 days ago

        As a reminder, I called this in 2004.

        that sound you hear is me pressing X to doubt

        Yud in the replies:

        The essence of valid futurism is to only make easy calls, not hard ones. It ends up sounding prescient because most can’t make the easy calls either.

        “I am so Alpha that the rest of you do not even qualify as Epsilon-Minus Semi-Morons”

      • skillissuer@discuss.tchncs.de
        13 days ago

        i suspect - i don’t know, but suspect - that it’s really leveraging all known protein structures ingested by google and it’s cribbing bits from what is known, like alphafold does to a degree. i’m not sure how similar these proteins are to something else, or if known interacting proteins have been sequenced and/or have had their xrds taken, or if there are many antibodies with known sequences that alphaproteo can crib from, but some of these target proteins have these. an actual biologist would have to weigh in. i understand that they make up to 96 candidate proteins, then they test them, but most of the time fewer and sometimes down to a few, which suggests there are some constraints. (yes this counts as one iteration, they’re just taking low tens to 96 shots at it.) is google running out of compute? also, they’re using real life xrd structures of target proteins, which means that 1. they’re not using alphafold to get these initial target structures, and 2. this is a mildly serious limitation for any new target. and yeah if you’re wondering, there are antibodies against that one failed target, and more than one, and not only just as research tools but as approved pharmaceuticals

      • skillissuer@discuss.tchncs.de
        14 days ago

        wait that’s just antibodies with extra steps

        living things literally are just fuzzing it until something sticks and it works

      • froztbyte@awful.systems
        14 days ago

        but but proteins! surely they’ve got it right this time! /s

        (I wondered what you’d say when I saw this. I can only imagine how exhausting)

  • Sailor Sega Saturn@awful.systems
    11 days ago

    Have you ever thought to yourself “I wish I could read Yud’s Logorrhea but in the form of a boring yet pretentious cartoon-- like a Rationalist Cinematic Universe!”?

    Well boy oh boy do I have a link for you:

    https://xcancel.com/ESYudkowsky/status/1832452673867546802

    https://www.youtube.com/watch?v=fVN_5xsMDdg

    TBH I thought the whole star blinking plot point was kind of neat when I was a teenager, but thought the story got a bit muddled by the end. Of course at the time I was trying to read it as a sci-fi story and not a P(doom) propaganda piece. My mistake.

    • swlabr@awful.systems
      11 days ago

      https://xcancel.com/ESYudkowsky/status/1832452673867546802

      I didn’t want to watch the cartoon because I thought I could just skim the story faster, and that’s how I read the “Hurr durr AI can derive general relativity in three frames, nothing personnel, kid” story in full for the first time. It sucks that people didn’t nip Yud in the bud early enough by telling him he lacked sci-fi chops, though I suspect that wouldn’t have slowed him down at all.

      The story itself is an allegory[1] about AI processing information fast[2]. Yud wasn’t thinking of himself as a sci-fi writer when writing this; he probably thought he was the messiah delivering a sermon, which… is exactly how I’ve come to understand Yud anyway.

      [1] the fact of which is explicitly spelt out in the middle third when it drops out of the narrative entirely to do so
      [2] see hurr durr description in paragraph above
    • froztbyte@awful.systems
      11 days ago

      thoughts, in order:

      1. wow that’s an annoying narrator voice (but I guess you have to stay on brand?)
      2. “oh god, this shit again. but maybe I won’t have to endure much of it?”
      3. (timecode: 00:21) checks video runbar “nope.”, tab closed
    • froztbyte@awful.systems
      16 days ago

      “we couldn’t excite enough people to buy yet another windows arm machine that near-certainly won’t be market-ready for 3 years after its launch, so now we’re going to force this shit on everyone”

      • self@awful.systems
        16 days ago

        come to Linux! we’ve got:

        • pain
        • the ability to create a fully custom working environment designed to your own specifications, which then gets pulled out from under you when the open source projects that you built your environment on get taken over by fucking fascists
        • about 3 and a half months til Red Hat and IBM decide they’re safe to use their position to insinuate an uwu smol bean homegrown open source LLM model into your distro’s userland. it’s just openwashed Copilot+ and no you can’t disable it
        • maybe AmigaOS on 68k was enough, what have we gained since then?
        • Steve@awful.systems
          16 days ago

          I’m actually still working on a project kinda related to this, but am currently in a serious “is this embarrassingly stupid?” stage because I’m designing something without enough technical knowledge to know what is possible, but trying to keep focused on the purpose and desired outcome.

          • self@awful.systems
            16 days ago

            I can lend some systems expertise from my own tinkering if you need it! a lot of my designs never got out of the embarrassingly stupid stage (what if my init system was a Prolog runtime? what if it too was emacs?) but it’s all worth exploring

            • Steve@awful.systems
              16 days ago

              I ask you this hoping it isn’t insulting, but how are you with os kernel level stuff?

              • self@awful.systems
                16 days ago

                it’s not insulting at all! I’m not a Linux kernel dev by any means, but I have what I consider a fair amount of knowledge in the general area — OS design and a selection of algorithm implementations from the Linux kernel were part of what I studied for my degree, and I’ve previously written assembly boot and both C and Rust OS kernel code for x86, ARM, and MIPS. most of my real expertise is in the deeper parts of userland, but I might be able to give you a push in the right direction for anything internal to the kernel.

            • bitofhope@awful.systems
              15 days ago

              what if my init system was a Prolog runtime?

              Not only can you describe the desired system state and have your init figure out dependencies, you can list just the dependencies and have your init set up all possible system states until you find one to your liking!

              what if it too was emacs?

              Emacs as pid 1 is a classic of the genre, but a prolog too? Wouldn’t a Kanren make more sense or is elisp not good for that?

              Sounds like the real horseshoe theory is that nerds of all kinds of heterodox political stripes will eventually reinvent/discover Lisp and get freaky with it. A common thread connecting at least RMS, PG, Eich, Moldbug, suzuran, jart, Aphyr, self and me.

              • self@awful.systems
                15 days ago

                Not only can you describe the desired system state and have your init figure out dependencies, you can list just the dependencies and have your init set up all possible system states until you find one to your liking!

                exactly! the way I imagined it, service definitions would be purely declarative Prolog, mutable system state would be asserts on the Prolog in-memory factbase (and flexible definitions could be written to tie system state sources like sysfs descriptors to asserts), and service manager commands would just be a special case of the system state assert system. I’m still tempted to do this, but I feel like ordinary developers have a weird aversion to Prolog that’d doom the thing.
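                (For anyone who wants the flavor of that without knowing Prolog, here’s a toy sketch in Python rather than Prolog: made-up service names, a plain set standing in for the asserted fact base, and “start a service” as nothing more than another assert plus a dependency walk. It’s an illustration of the shape of the idea, not the actual design.)

```python
# toy sketch of the idea: declarative service definitions, a mutable "fact base"
# standing in for asserted system state, and service-manager commands that are
# just asserts which trigger a dependency walk. all names are illustrative only.
facts: set[tuple[str, str]] = set()           # e.g. ("running", "sshd")

SERVICES = {                                   # purely declarative definitions
    "network": {"requires": []},
    "sshd":    {"requires": ["network"]},
    "nginx":   {"requires": ["network"]},
}

def start(name: str) -> None:
    if ("running", name) in facts:
        return                                 # state already satisfied
    for dep in SERVICES[name]["requires"]:
        start(dep)                             # satisfy dependencies first
    print(f"exec'ing {name}")                  # a real init would fork/exec here
    facts.add(("running", name))

def assert_fact(fact: tuple[str, str]) -> None:
    """Assert a fact; wanting a service running is just another fact."""
    facts.add(fact)
    kind, name = fact
    if kind == "wanted":
        start(name)

assert_fact(("wanted", "sshd"))
print(sorted(facts))
```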

                Emacs as pid 1 is a classic of the genre, but a prolog too? Wouldn’t a Kanren make more sense or is elisp not good for that?

                this idea was usually separate from the Prolog init system, but it took a few forms — a cut-down emacs with a Lisp RPC connection to a session emacs (namely the one I use to manage my UI and as a window manager) (also, I made a lot of progress in using emacs as a weird but functional standalone app runtime) and elisp configuration, a declarative version of that implemented as an elisp miniKanren, and a few other weird iterations on the same theme.

                Sounds like the real horseshoe theory is that nerds of all kinds of heterodox political stripes will eventually reinvent/discover Lisp and get freaky with it.

                the common thread might boil down to an obsession with lambda calculus, I think

    • gerikson@awful.systems
      13 days ago

      I don’t see the point of this app/service. Why can’t someone who is trusted at the company (like HR) just check ID manually? I understand it might be tough if everyone is fully remote but don’t public notaries offer this kind of service?

    • Steve@awful.systems
      13 days ago

      Our combination of AI and in-house human verification teams ensures bad actors are kept at bay and genuine users experience minimal friction in their customer journey.

      what’s the point, then?

      • rook@awful.systems
        13 days ago

        One or more of the following:

        • they don’t bother with ai at all, but pretending they do helps with sales and marketing to the gullible
        • they have ai but it is totally shit, and they have to mechanical turk everything to have a functioning system at all
        • they have shit ai, but they’re trying to make it better and the humans are there to generate test and training data annotations
    • Mii@awful.systems
      13 days ago

      Am I understanding this right: this app takes a picture of your ID card or passport and then feeds it to some ML algorithm to figure out whether the document is real, plus some additional stuff like address verification?

      Depending on where you’re located, you might try and file a GDPR complaint against this. I’m not a lawyer but I work with the DSO for our company and routinely piss off people by raising concerns about whatever stupid tool marketing or BI tried to implement without asking anyone, and I think unless you work somewhere that falls under one of the exceptions for GDPR art. 5 §1 you have a pretty good case there because that request seems definitely excessive and not strictly necessary.

      • Sailor Sega Saturn@awful.systems
        13 days ago

        They advertise a stunning 95% success rate! Since it has a 9 and a 5 in the number it’s probably as good as five nines. No word on what the success rate is for transgender people or other minorities though.

        As for the algorithm: they advertise “AI” and “reinforced learning”, but that could mean anything from good old fashioned Computer Vision with some ML dust sprinkled on top, to feeding a diffusion model a pair of images and asking it if they’re the same person. The company has been around since before the Chat-GPT hype wave.

        • YourNetworkIsHaunted@awful.systems
          12 days ago

          Given that my wife interviewed with a “digital AI assistant” company for the position of, effectively, the digital AI assistant, well before the current bubble really took off, I would not be at all surprised if they kept a few wage-earners on staff to handle more inconclusive checks.

  • self@awful.systems
    15 days ago

    James Stephanie Sterling released a video tearing into the Doom generative AI we covered in the last stubsack. there’s nothing too surprising in there for awful.systems regulars, but it’s a very good summary of why the thing is awful that doesn’t get too far into the technical deep end.