veen  ·  17 days ago  ·  post: Happy 2nd Birthday, Chat GPT!

You know, WSJ, that article would’ve been a hundred times more interesting had you asked that question of power users, researchers, people actually at the bleeding edge, instead of Tom, who happily admits he’s cutting corners on his daughter’s wedding speech.

Two years feels like too little time to really see where we’re going. To draw a parallel to smartphones, I’m undecided whether we’re still in the novelty iBeer phase of this technological change or have already reached the Fruit Ninja phase of broad mass adoption.

The end stage, the largely awful but hard-to-avoid TikTok phase of LLMs, is still beyond reach, I think. Even if the models themselves are already plateauing, or were to stop improving altogether, there are steps forward I expect will be made, and ways in which this will end up improving / enticing / sucking that we can’t foresee yet. I don’t believe this is it. How could it be, when we’ve only so recently unlocked the ability to mass-produce, mass-fake whatever we want in nearly every medium we want?

I recently played with NotebookLM, which can generate a listenable podcast out of any text you throw at it. I gave it a long, detailed report I know intimately and had just given a presentation about that week. It picked the same framing device that I did in my presentation, without that framing device ever being hinted at in the source material. That blew my mind. Even a fucking Suno song from the newest model version that I heard the other day sounded passable, instead of like peak cringe that must be exterminated with fire. Just today I saw some Midjourney photos that fooled me.

When’s the last time we were talking about six-fingered people? Because it’s been a while, and I’ve resigned myself to accepting that, sooner rather than later, even the best of us won’t be able to tell the difference most of the time. Right now it requires a modicum of thought and expertise and a boatload of tolerance for shitty UIs to get really compelling results, but that bar is going to get lower and lower until even Judith can make candle labels that don’t trigger my AI-slop spidey senses.

Whether all of that translates into the professional world is harder to guess. I’m already seeing the beginnings of enshittification in, say, consultancy work, which I expect to spread to all other types of knowledge work. I think the next push will come not from better models, but from better strings of models. That’s how NotebookLM works behind the scenes: it’s a string of models that first summarize, then stylize, then add the uuh’s & aah’s, and then feed the result into a speech synthesis model.

Similar stacks will become normal for knowledge work. You plug your client’s demands into an expensive, slow architecting LLM trained on all your earlier reports; it writes a detailed outline for cheaper, larger-context models to flesh out; each part is fed to cheaper models to iterate over; and then it’s all run through a half dozen style, language, and prose models. Rawdogging ChatGPT is for losers (sorry, WSJ), and prompt engineering in the right model is easily the best way to double the quality, but if I were to bet, stacking multiple well-prompted models is where the next flurry of development happens.
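
A minimal sketch of what such a stack might look like, in Python. Everything here is illustrative: call_llm is a hypothetical stand-in for whatever completion API you’d actually use, and the model names and prompts are made up, not any real product’s pipeline.

```python
# Hypothetical model stack: an expensive "architect" model outlines,
# cheaper models draft and iterate on each section, and a final set of
# passes polishes the prose. call_llm and all model names are made up.

def call_llm(model: str, prompt: str) -> str:
    """Placeholder for a real chat-completion call (e.g. an HTTP API)."""
    raise NotImplementedError

def write_report(client_demands: str, prior_reports: str) -> str:
    # Step 1: a slow, expensive model plans the whole document,
    # primed with everything written before.
    outline = call_llm(
        "big-architect-model",
        f"Prior reports:\n{prior_reports}\n\n"
        f"Write a detailed, section-by-section outline for: {client_demands}",
    )

    # Step 2: cheaper, larger-context models flesh out each section,
    # then iterate over their own drafts.
    sections = []
    for item in outline.split("\n\n"):
        draft = call_llm("cheap-drafter", f"Write this section in full:\n{item}")
        draft = call_llm("cheap-drafter", f"Revise for accuracy and flow:\n{draft}")
        sections.append(draft)

    # Step 3: run the assembled report through style/language/prose passes.
    report = "\n\n".join(sections)
    for aspect in ("style", "language", "prose"):
        report = call_llm("polish-model", f"Improve the {aspect} of:\n{report}")
    return report
```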

I think one of the biggest risks will be the inherent biases in every model, the way their misdirection will be subtle as a whisper. A thought I had the other day: to what degree does, say, Squarespace exist in transformer models? Their omnipresence in every podcast and YouTube essayist under the sun has to be a factor, the same way Whisper’s speech-to-text will often hallucinate “don’t forget to like and subscribe” because it’s been trained on YouTube transcriptions. Squarespace is a harmless omnipresence, but it’s not a far stretch to imagine all the other ways in which dominant groups or interests might influence models, which in turn influence how we think or write or interact with each other. It could be that our thinking gets pushed a dozen …-ist ways at a time. It could be that laziness becomes the norm in everything we write and think. I’m not looking forward to finding that out after we now, ostensibly, fuck around.
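
If you wanted to actually measure that Squarespace question rather than just wonder about it, a toy probe might look like the sketch below. Again, sample_completion is a hypothetical stand-in for any sampling API, and the prompt and brand list are just guesses at where the bias would show:

```python
from collections import Counter

def sample_completion(prompt: str) -> str:
    """Placeholder for a real sampling call (temperature > 0, so runs differ)."""
    raise NotImplementedError

def brand_prior(prompt: str = "The easiest way to build a website is",
                n: int = 200) -> Counter:
    # Count which brands the model reaches for unprompted; a heavy skew
    # toward one name hints at what the training data made omnipresent.
    brands = ("Squarespace", "Wix", "WordPress", "Webflow")
    counts: Counter = Counter()
    for _ in range(n):
        text = sample_completion(prompt).lower()
        counts.update(b for b in brands if b.lower() in text)
    return counts
```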

There might be a way in which AI models will separate experts from the rest of the world. Why would any knowledge-work job hire an intern anymore? Why would they hire someone fresh outta college? I can think of a long list of work I did in my first years that I see already being completely overtaken by AI, or that will be eventually. Which means that future experts either need to find a way to grind out domain knowledge on their own, or spend much longer in academia / training. Because how else are you gonna learn to discern the truth in your field from hallucinations?

I’m curious whether your thoughts have changed about the longer-term consequences of all this - what does this look like in a decade, to you?