• 0 Posts
  • 7 Comments
Joined 1 year ago
Cake day: August 30th, 2023




  • No it’s not. GPT-4 is nowhere near suitable for general interaction. It just isn’t.

    “Just do machine learning to figure out what a human would do and you’ll be able to do what a human does!!1!”. “Just fix it when it goes wrong using reinforcement learning!!11!”.

    GPT-4 has no structured concept of understanding. It cannot learn on the fly like a human can. It is a stochastic parrot that badly mimics the way people on the internet talk, and it took an absurd amount of resources to get it to do even that. RL is not some magic process that makes a thing do the right thing just because it did the wrong thing enough times, and it will not turn GPT-4 into a general agent.



  • Needling in on point 1 - no I don’t, largely because AI techniques haven’t surpassed humans in any given job ever :P. Yes, I am being somewhat provocative, but no AI has ever been able to 1:1 take over a job that any human has done. An AI can do a manual repetitive task like reading addresses on mail, but it cannot do all of the ‘side’ work that bottlenecks the response time of the system: it can’t handle picking up a telephone and talking to people when things go wrong, it can’t say “oh hey the kids are getting more into physical letters, we better order another machine”, it can’t read a sticker that somebody’s attached somewhere else on the letter giving different instructions, it definitely can’t go into a mail center that’s been hit by a tornado and plan what the hell it’s going to do next.

    The real world is complex. It cannot be flattened out into a series of APIs. You can probably imagine building weird little gizmos to handle all of those funny side problems I laid out, but I guarantee you that all of them will then have their own little problems that you’d have to solve for. A truly general AI is necessary, and we are no closer to one of those than we were 20 years ago.

    The problem with the idea of the singularity, and the current hype around AI in general, is a sort of proxy Dunning-Kruger. We can look at any given AI advance and be impressed, but it distracts us from how complex the real world is and how flexible you need to be to act as a general agent: one that actually exists in the world and can interact and be interacted upon outside the context of a defined API. I have seen no signs that we are anywhere near anything like this yet.


  • I suppose my big annoying soapbox opinion is one I’ve had with every other accidentally useful application of ChatGPT - why the hell are we using LLMs to do this? Splitting text into paragraphs can surely be done with a much simpler NLP model (i.e. one that doesn’t require a GPU per user), and it’s not like speech-to-text is new.

    I imagine you could use a simple model with ‘good enough’ accuracy, and then have some basic keybinds to very quickly fix up the rest of it. Goto-next-paragraph, goto-previous-paragraph, move-sentence-forward, and move-sentence-backwards would do the job.
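    For concreteness, here’s a minimal sketch of the kind of “good enough” model I mean: TextTiling-style paragraph segmentation, where you break the text wherever lexical cohesion between adjacent sentences drops. This runs on a CPU in stdlib Python - the function names and the threshold value are illustrative, and a real system would want a proper sentence splitter and some smoothing of the similarity curve.

    ```python
    # Lexical-cohesion paragraph splitter (TextTiling-style sketch).
    # No GPU, no LLM: break wherever adjacent sentences share few words.
    import re
    from collections import Counter

    def sentences(text):
        """Naive sentence splitter: break on ., !, ? followed by whitespace."""
        return [s.strip() for s in re.split(r'(?<=[.!?])\s+', text) if s.strip()]

    def bag(sentence):
        """Bag-of-words vector for one sentence (lowercased word counts)."""
        return Counter(re.findall(r'[a-z]+', sentence.lower()))

    def cosine(a, b):
        """Cosine similarity between two word-count bags."""
        num = sum(a[w] * b[w] for w in set(a) & set(b))
        den = (sum(v * v for v in a.values()) ** 0.5) * \
              (sum(v * v for v in b.values()) ** 0.5)
        return num / den if den else 0.0

    def split_paragraphs(text, threshold=0.1):
        """Start a new paragraph wherever cohesion drops below the threshold."""
        sents = sentences(text)
        bags = [bag(s) for s in sents]
        paras, current = [], [sents[0]]
        for prev, cur, s in zip(bags, bags[1:], sents[1:]):
            if cosine(prev, cur) < threshold:  # topic shift: break here
                paras.append(' '.join(current))
                current = [s]
            else:
                current.append(s)
        paras.append(' '.join(current))
        return paras
    ```

    Something this crude will obviously misfire sometimes, which is exactly where the goto-next-paragraph / move-sentence keybinds come in: the model gets you 90% of the way and the keybinds handle the rest.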

    N.B. The general process is cool though! I’m neurodivergent like you and I’ve spent a lot of time lately thinking about how to make the actual process of note taking as seamless as possible. I’ve found that reducing the barriers between me and the metaphorical paper has really increased how likely I am to write my thoughts down.