veen  ·  11 days ago  ·  post: Happy 2nd Birthday, Chat GPT!

I've been thinking about this over the weekend and I can't put my finger on exactly where (or if) I disagree, so I'm gonna write to sharpen my thinking, if you'll humour me.

Maybe the biggest disagreement is that I don't understand what qualifies as world-changing to you. How can you believe NFTs are already changing the world, when their largest use case is improving the bottom line of a few specific luxury brands by destroying their grey market, yet not see LLMs making a dent in the universe?

I mean - are LLMs world-changing to the tune of the $200B poured into them, or of Nvidia's market cap? Hell no. Is AGI around the corner? Also no. Are LLMs world-changing at all? Well, I'd put it at 'somewhere between YouTube and smartphones'. They'll make a bunch of things worse and a bunch of things better, like any powerful tool. But because it's a tool that can so directly influence the core of knowledge work, I find it very hard to believe it won't change things at all, as you seem to suggest. Whether that means the big corporations are making smart decisions about their AI investments is left as an exercise for the reader.

It's not that I'm unaware that LLMs are, to a large and arguably frightening degree, a bullshit machine. You know how much I hate Tesla's """self-driving""" for exactly that reason. I will not trust LLM output for anything with serious consequences, just like I will never have Musk take the wheel so I can nap. But as a co-intelligence? Microsoft was at least somewhat on point in naming it Copilot. The copilot can do useful things, but I'm still piloting this thing; I'm still making the decisions to the degree that I want to make them.

The guy behind NotebookLM makes the point that the useful thing about LLMs isn't just the model itself, but the ability to pair it with a context window that has grown so large it exceeds what most of us can hold in our heads. It can find a needle in a haystack but, more importantly, it can see the entire haystack. Hallucinations are much rarer when the source text is right there in the context, so it's also markedly more accurate and can cite shit.
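To make that concrete, here's a rough sketch of what "the text is right there" looks like in practice: stuff the whole document into the prompt and ask for answers that quote it. The model name, the prompt wording, and the use of the openai client are my own illustrative assumptions, not anything NotebookLM actually does.

    # Minimal sketch: ground the model by handing it the whole source document
    # and asking it to answer only from that text, with quotes it can be
    # checked against. Model name and prompt wording are illustrative.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def answer_from_document(document: str, question: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system",
                 "content": "Answer using only the provided document. "
                            "Quote the passages you rely on, and say so if "
                            "the answer isn't in the text."},
                {"role": "user",
                 "content": f"Document:\n{document}\n\nQuestion: {question}"},
            ],
        )
        return response.choices[0].message.content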

I was discussing LLMs with my FIL the other day. He has been writing reports on construction failures for almost three decades now. All his work, and that of his colleagues going back five decades, has been digitized into an archive of searchable text. But he said it's hard to use in his desk research, because he can only search on keywords and those keywords change a lot over time. My FIL also has a hard time getting started on a new report - not because he doesn't know his shit, but because he's not the best at structuring his thoughts into a logical, linearly ordered line of reasoning. I'm 100% sure his company would pay a lot for a suite of LLM tools that lets him and his colleagues supercharge their desk research. Not to replace their expertise but to enhance it: have an LLM surface decades of knowledge in a way that isn't possible now, combine it with the relevant information in a case, and then help them actually make use of it.
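Something like this is what I have in mind for that archive: semantic search, so a query matches on meaning rather than on whatever keywords were fashionable in 1985. The file layout and model choice are assumptions for the sake of the sketch, not his company's actual setup.

    # Minimal sketch of semantic search over a digitized report archive,
    # so queries match on meaning instead of era-specific keywords.
    # Assumes the reports are plain-text files in ./archive; the embedding
    # model is a small general-purpose one, picked only for illustration.
    from pathlib import Path
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")

    # Embed every report once (in practice you'd chunk long reports and cache this).
    reports = {p.name: p.read_text(errors="ignore") for p in Path("archive").glob("*.txt")}
    names = list(reports)
    embeddings = model.encode([reports[n] for n in names], convert_to_tensor=True)

    def search(query: str, top_k: int = 5):
        """Return the reports most semantically similar to the query."""
        q = model.encode(query, convert_to_tensor=True)
        scores = util.cos_sim(q, embeddings)[0]
        best = scores.argsort(descending=True)[:top_k]
        return [(names[int(i)], float(scores[i])) for i in best]

    # e.g. search("balcony collapse caused by corroded anchors") can surface
    # decades-old reports even if they never use the word "corroded".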

Now multiply that use case across all the academics and knowledge workers out there. We still need professionals in the loop, just like I still need to be in control in a Tesla. We need expertise to tell bullshit from fact. But that doesn't mean LLMs are doomed to be useless toys - even if I, too, use them as toys from time to time.