There is no evidence whatsoever that it scales in anything but an asymptotic fashion, and every indication that the asymptote sits far below the level where it threatens expertise.
I'm not sure of that. People love to say that about technology. The problem here is that the humans being replaced are pretty darn expensive, and depending on the application it's probably going to cut costs to something like a fifth just by getting rid of actors and actresses for films. That's before considering things like cameramen, directors, writers, and the crew who set up and tear down sets. Given just how good video game graphics already are, with sufficient resources I don't think you could easily tell the difference between a mid-budget TV show made this way and a conventional one, perhaps with human voice acting (or maybe AI can do that too, not sure yet). And if I can make my sci-fi show for a tenth the cost by not needing actors or a big crew, then I can put more money into writing, and I don't even need the same sized audience as other shows.
People aren't going to watch AI. Edison did everything he could to keep actors' names and faces out of his early films. He knew as soon as there were recognizable actors in film, they would absolutely dominate the medium the same way they absolutely dominated stage. No one is going to watch "AI football player" sell you FanDuel. They're going to watch Tom Brady. Tom Brady is going to cost you $1.5m so why are you fiddlefucking around with a bunch of bullshit AI anything? Set aside the fact that you can't - SAG struck for four months to make sure that every human shown in a Hollywood movie or TV show is an actual human making an actual $125 a day. Every dumb shit on reality television is making at least $125 a day because that's the contract. Every dumb shit behind the camera (raises hand) is making a fuckton more than that because that's the contract. That contract says "no AI, not anywhere, not ever." So sure. You can watch Skibidi Toilet. But out here in the real world you're going to watch humans filmed by humans. Your argument boils down to a basic lack of comprehension of an entire industry.
People watch machinima and play video games with hours of cutscenes. If people were okay with animation, machinima, game cutscenes and so on before AI, they aren't going to reject a film because it doesn't have real actors. We watched this (https://youtu.be/jzQPYuwzwH8?si=FCsQoM2IE797BgQR) in 2000. I dare say that AI could produce something this good within five years. In fact, that SAG has to fight so hard to prevent such a thing tells me exactly how scared they are of it. You don't fight to ban things you don't think can take over your industry; you fight the things you fear will. If AI can't do anything to threaten the livelihoods of people making movies and TV, why was it critical that all production stop for weeks to make absolutely positively sure that no AI will ever be used to make an American movie? And what happens when other countries don't honor that ban? If I make an AI show in France with no SAG actors, SAG has no say. And it might cost a tenth of what real actors and crews would.
Machinima is made by people. Cutscenes are made by people. This is a list of every human who worked on Final Fantasy X. You're extrapolating "400 people worked on this thing in 2000" to "no one will work on anything in 2030" based squarely on your naive and uninformed conception of the process of creating filmed entertainment. Here, let's play a game: This is the list of people who worked on Snow White in 1937. And this is the list of people who worked on Frozen II in 2019. I think if you compare those three lists in chronological order, you will find that modern animation takes more people, not fewer, and that the trend is such that all of Los Angeles will be working on Frozen 5 by 2063. SAG killed AI because the AMPTP wanted the right to scan an actor once and use them as a digital extra forever without paying dues, wages or royalties (just as an aside - "extra" is an uncredited role, so if Frozen 5 has extras, they'll have to come from San Diego). SAG fought this because every star you've ever seen in the theater played an extra for ramen money at some point and without the ramen money there's no Hollywood. You could have Googled that - but then you might have accidentally learned something. Just like what happens if you make an AI show in France - Netflix won't carry it, Canal Plus won't carry it, nobody will carry it because they're all signatories to the same contracts. In general? If you don't know anything about the subject, and the situation doesn't make sense to you, it's a sign you need to research the subject, not that everyone who knows anything about it is an idiot. I know something about this subject. Animation I've worked on has racked up over a billion views on Youtube. And as you've likely noticed, I'll freely share well past the point anyone else cares. My one word of advice is that if I've made assertions, it's likely because I'm confident in my knowledge of the subject, and that confidence is generally well-earned.
And this is the exact same stupid "it will never happen to MY industry" horseshit that has preceded every industry getting automated away. Nobody thought that computers would mean the death of stores, until they enabled people to shop from home and get it delivered. Robots were never supposed to replace workers in restaurants, except now even mid-scale restaurants have discovered that it's much cheaper to put a Wi-Fi enabled iPad on the table than pay a human to take your order. They pay one person to take the food out to all the tables. They reduce headcount and make more money. AI is taking over a lot of office jobs now too. But don't worry, your industry is specialer than every other job that's ever been automated away. I mean we NEED mailroom staff, because all the people who work in offices started in the mailroom (in the 1980s), except now there hasn't been a mailroom since the 1990s because people realized that they could reduce their labor costs by using emails instead of interoffice memos hand delivered by humans.
Dude, we had this discussion just a couple days ago: are you arguing that my direct and existential experience with exactly this issue somehow disqualifies my opinion?

To the contrary - EVERYONE thought Amazon was coming for their livelihood, they just knew there was nothing they could do about it. Barnes & Noble was blocked from buying Ingram because it would have created a vertical monopoly; Amazon was allowed to eat everyone's lunch because they didn't have stores. The first time I read about the downfall of cheap service was in Newsweek in 1987. There's kiosks and there's table service, and I think you will find that aside from the pandemic, hospitality employment has been growing steadily since WWII. McDonald's is definitely employing fewer workers per store, but that's never really been considered an overly-desirable job and really - what have we lost?

Which ones? I recognize that my experience is a count against me but I've got more employees than fingers at this point. How much of your payroll have you farmed out to AI?

...is this a Sammy Glick thing? What are you getting at, exactly?

Let's back up a minute: I pointed out that it takes hundreds of skilled individuals to make a movie and you came back with

- retail
- fast food
- mail sorting

And you came back maaaaaaad.

Once more with feeling: Izotope came out with a plugin called "Total Mix" in 2011 or 2012. Theoretically it would take your shitty Discovery Channel audio and magically tweak it so that it sounded like a TV show. It was pretty comical; a lot of us beta-test for Izotope and that one was something they didn't even tell us about because... you know. We would have been mad. It was okay though, because instead they unleashed it on a bunch of editors who hate us anyway because we insist we need annoying things like "time" and "money" to make their pretty videos sound like television, so Izotope didn't need us anymore anyway.
Except the editors tried Total Mix and came back with "what is this hickory-roasted bullshit" because even though the "AI" (yes, they used that terminology) was definitely listening to their audio, and definitely doing something, it didn't know the audio equivalent of "cats have four legs". It was such a catastrophe that Izotope spent a bunch of money scrubbing the Internet of any mention of "Total Mix." You won't find any record of it now - in part because RME's had a product called "Totalmix" for 20 years (nice job Izotope) and in part because mostly what AI is doing these days is data poisoning.

And really, Izotope now has a number of garbage products they sell to neophytes - Vea, Nectar, Neutron, Tonal Balance Control and Neoverb are all "AI" products designed to make your dogshit amateur production sound less dogshit. And they do! They make your dogshit sound less dogshit. But they don't make it sound good.

Izotope, wisely, still sells real tools. They're expensive, they're complicated and you know what? They are fucking chockablock with AI. I've been using RX for more than 20 years now and the stuff it can do is spooky. But it won't do any of that spooky shit for you because you don't know what you're doing. You could learn? You could get as good at it as I am! But you'd have to put in the time, and then you'd want to be paid. And then we'd be right back where we started.

Look. Let's say a robot can do 99% of my job. Let's say you spent $50k on a commercial with absolutely no humans in it. Let's say you're competing against an ad agency that you know has a human who gets a thousand dollars to do an audio polish. Let's be honest - you're going to pay me a thousand dollars. Because I can get you that last one percent that keeps you from losing your next contract.

Machines have been displacing human workers since the mutherfucking plow, dude. The skills change and so does the work.
I tell you what, though - an Amish dude with a team of horses is always going to kick my ass in a corn-growing contest no matter how bitchin' my tractor is, 'cuz the Amish dude? Knows a thing or two about growing corn. Me? I'm gonna google "how do you grow corn" and try and figure out which of five contradictory snippets I should pay attention to. I'm fukt.

It's just a tool. It's feared by people who don't understand tools, and by people who understand what happens when you let people do whatever they want with tools. And I say this as an apex predator in a field that has already experienced an "AI-like" mass extinction event: there are far fewer professional mixers now than there were ten years ago, not because AI can do it, but because the massive proliferation of untalented executives who don't understand post-production made everyone read their television. If you don't need it to actually sound good, you've been able to do it at your house since shortly after Nirvana's "Nevermind" came out. If you need someone to pay for it, I'm right here with $30k worth of Pro Tools.
> Nobody thought that computers would mean the death of stores, until they enabled people to shop from home and get it delivered.

> Robots were never supposed to replace workers in restaurants, except now even mid-scale restaurants have discovered that it's much cheaper to put a Wi-Fi enabled iPad on the table than pay a human to take your order.

> AI is taking over a lot of office jobs now too.

> I mean we NEED mailroom staff, because all the people who work in offices started in the mailroom (in the 1980s), except now there hasn't been a mailroom since the 1990s because people realized that they could reduce their labor costs by using emails instead of interoffice memos hand delivered by humans.
VFX artists have been using AI tools for 20 years or more. Any artist who didn't have to hand-trace a rotoscope line has been using AI in one form or another. I recognize I'm the only person here who knows what "rotoscope" means, which is part of the problem - my posse has been doing cutting-edge shit since college, because if you wanna see rapid adoption, check out filmed entertainment.

If you look at AI-generated content, the obvious place to use it is backgrounds. Mattes have been effectively gone since the early-mid '90s because computers have been able to generate plenty-good-enough backgrounds. AI makes that cheaper, which mostly means that the guys who are doing backgrounds are going to do more of them.

Look. It's gonna play out like this. Here, sit with me for a few minutes:

That took Kerry Conran, talented Cal Arts grad, dedicated cineaste, four fucking years to make. Worked out tho, 'cuz after four years he finished "chapter 1", a friend got it in front of Jon Avnet, and four years and $70m after that, the world got:

HERE IS WHAT AI IS GOING TO DO

It's not gonna take four years grinding on your own to make Chapter 1 of Sky Captain. It's going to take months or weeks. The skills you use to trick the AI are going to be novel and they will be successful. It will be impressive, and those of us who grew up with Steenbecks will marvel. But it's still gonna take tens of millions of dollars, Jude Law and Angelina Jolie to make it into a movie. Because a bunch of amateurs are always going to be slain by a bunch of professionals. Period. Full stop. No discussion.

And that's the stupidest bullshit about this whole kerfuffle - everyone's all "ZOMFG I can't imagine how threatened some hypothetical professional must feel about this" because they can't imagine some hypothetical professional ANYWAY. Trust me - if you make your living doing visual FX, you're eagerly watching all this AI bullshit to see if it's capable of giving you a tool to speed up your workflows.
And so far, what you see is something that doesn't care how many kings there are in a game of chess, and if you look deeper, you're troubled by the fact that none of the people selling this technology sense that's a problem.

> He could not afford better equipment, so he used equipment given him in payment for projects that he worked on, such as desktop publishing of articles. His computer (including the equipment he earned) was outdated and slow. He dropped out of society, and spent all of his free time creating the short, working only enough to support himself and his project. He later remarked that he "had no life", and would sometimes hide under his desk in a fetal position, feeling tempted to give up on his project.
> I recognize I'm the only person here who knows what "rotoscope" means which is part of the problem

Nah, man. Everybody who played Prince of Persia on the Apple IIe remembers rotoscoping!
Except that in almost every instance where a profession has been automated, that’s exactly what happened. Having a computer that keeps track of your inventory makes the workflow better for the logistics department, and then using a computer to schedule deliveries makes that part easier as well. And you keep doing that and eventually you’re doing the work of twelve professionals and your team shrinks down to 1/12th of what it was. And then you chip away at those tasks until you halve the workforce again, and eventually the computer is doing all of those tasks and the people who used to do those things are obsolete. Then they go back to school hoping to find a training program where they can make money before AI takes those jobs too.
Bitch I've got four computers and eight screens in front of me and the only thing that has changed since the era of magnetic tape is I can do more, faster, with less. I can't say that any simpler. You would have no more idea what I'm doing now than you would in the era of magnetic tape because I'm a professional with professional tools. I can't say that any simpler either. There's this assumption that if the tools get better the budget will shrink and that simply Does not Happen.
But surely there's way more logistics and shipping being done now in the age of computers than there was before. I'm young, but I still remember a time before Amazon. I think you're imagining the one exact thing the computer is now doing as the totality of the job, whereas Klein (I assume) is talking about the industry as a whole, which generally increases in scope as it becomes cheaper, easier and more prevalent.
Oh yeah, totally. FWIW, I'm fairly into watching behind-the-scenes vids and have tried learning Blender a few times, so while I'm no more than a beginner, I at least know what rotoscoping and mattes are. And if your job were just those, I'd be worried. I don't think most VFX artists' jobs are, though, of course. The AI I see being useful for someone who actually cares about quality is the kind that speeds up things already being done: rotoscoping like you said, inpainting, photogrammetry, all the places AI is already being used where maybe the new techniques can do better. The NeRF stuff in particular I think could be big: turning many simultaneous video recordings into a 3D scene, so the camera can be repositioned after the fact. Nobody besides a handful of nerds wants to watch ugly stock footage stitched together with ChatGPT writing the story lol.
Here's the TRUE issue:

1) LLMs lose money whenever you use them.
2) ChatGPT Plus is $20 a month. Midjourney is $10 or $60 a month. Copilot is $30 a month. Stable Diffusion is $9 or $49 a month.
3) Photoshop is $23 a month. Premiere is $23 a month. Animate is $23 a month. Audition is $23 a month. All of them combined is $60 a month.
4) Adobe Stock is $30 a month.

Fundamentally, "make me an image that might have too many toes and might just be a bad rip-off of a license-protected product" is consumer-cost-competitive with "find me an image that was created by humans under crystal-clear licensing terms." And fundamentally, "draw a fuzzy monster that is either kneeling or squatting, I don't care" is more expensive than "here is an absolute bazooka of a content tool in any medium you care to work in." And that is why none of this shit is being sold to professionals - it's nowhere near the cost-benefit breakpoint where they'd consider it.

You know what fucking sucks about being a creative professional? You're surrounded by other creative professionals who are so fucking egotistical that they're 100% certain they're a creative genius while you're a button pusher. They'll slave away for weeks on something visual and then when it gets to the audio their every instruction is "no, more like this. No, more like that. No, do it more like that. Can't you just give me your sessions and teach me how to use your software, you're clearly a fucking idiot - oh oops, did I say that out loud?"

I "worked" with this guy Jesse - friend of a friend - who was a graphics guy on Jimmy Kimmel. He wanted a sound effect for something - I think it was a brain ray zapping Bryan Cranston or some shit for half a second in a 2-minute throwaway bit before his interview. So I spent 20 minutes coming up with a brain ray zapping sound effect. Mutherfucker called me during lunch and left a seven-minute message about all the changes he wanted.
I noped out and said "sorry, Jesse, no bid" and the only award his short film ever got? Was for sound. That I did. It's fuckin' awesome. It's a werewolf in wrestling gear painted gold for some reason. But the idea that I might know what I'm doing is absolutely fucking unthinkable to a certain segment of creative.

All this AI bullshit is for that guy. The dipshit who prefers to shout at other professionals rather than trust them, who has no respect for the expertise of others, who can't fucking wrap their head around the idea that art requires artists.

And they don't have enough money to support it. Fuckin' every AI company out there is losing money at prices that make Creative Cloud look like a bargain, and their solution is to ask for 10% of global GDP to fix the problem.
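The price comparison above is simple enough to sanity-check. A throwaway sketch using the dollar figures quoted in this thread (monthly USD; the tier labels are my own shorthand, not the vendors' names):

```python
# Back-of-the-envelope monthly cost comparison, using the prices quoted
# in the thread. Low/high reflect the two tiers quoted for Midjourney
# and Stable Diffusion.
ai_stack_low = {"ChatGPT Plus": 20, "Midjourney": 10, "Copilot": 30, "Stable Diffusion": 9}
ai_stack_high = {"ChatGPT Plus": 20, "Midjourney": 60, "Copilot": 30, "Stable Diffusion": 49}
adobe_stack = {"Creative Cloud (all apps)": 60, "Adobe Stock": 30}

ai_low = sum(ai_stack_low.values())
ai_high = sum(ai_stack_high.values())
adobe_total = sum(adobe_stack.values())

print(f"AI stack: ${ai_low}-${ai_high}/mo vs Adobe: ${adobe_total}/mo")
```

So the generative stack runs $69 to $159 a month against $90 for the full professional toolchain plus licensed stock, which is the "consumer-cost-competitive at best" point being made.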
LOL. I've been following a few AI artists for a couple years now. They're all really clear about the fact that what they're doing is a wholly different process than traditional pixel-pushing, with different inputs, different outputs and different happy little accidents. I am honestly and enthusiastically supportive of the use of AI by creative professionals, and I am honestly and enthusiastically supportive of the use of AI by amateurs. Every time the tools get better, the world improves.

The tedious thing for me is that the techbros REALLY want to make this about the death of the professional class, and there's absolutely zero fucking evidence to even have the discussion. It comes back to that fucking storyboard girl. Yay, you paid $10 a month to get a bunch of dragon pictures that may or may not be associated with a "movie" you intend to make someday. You weren't about to pay a storyboardist anyway, nor were you about to even try to get vaguely good at it.

I've got buddies who make $2k a day storyboarding. I also shoveled about $600 into Frameforge. Between Frameforge, Photoshop and ComicLife I got a half-dozen pages into a graphic novel; it's a lot of fuckin' work. And A) Microsoft Copilot Girl is NEVER putting in that effort, B) no aspect of Microsoft Copilot, or any AI for that matter, reduces that effort in any meaningful way.
It's visual and obvious, dude. The 1x dog is a nightmare dog, the 4x dog is a fuzzy dog, the 16x dog is a less-fuzzy dog. But the 16x cat still has occasional spurious limbs. It's obvious that the 16x cat is a sparkly, cinematic, 4k-lookin' cat, but there's nothing in the model to demonstrate that a 64x cat is any less likely to pop an extra leg every now and then. Photorealistic renders of things that can't exist have been a staple since Deep Dream, and what's clear is that the cost-per-pixel is linear while the quality-of-massed-pixels hasn't changed appreciably. Further, accuracy isn't even a consideration - "close-up of a short furry monster kneeling" gets you a short furry monster squatting, and "can it tell the difference between kneeling and squatting" is NOT a throwaway problem. More than that, it's clearly not a focus of development.
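The 1x/4x/16x argument can be stated as a toy model: spend scales with pixel count, but the per-image chance of an anatomical error stays flat. Every number here is made up purely to illustrate the shape of the claim, not measured from any real model:

```python
# Toy illustration: if compute cost grows with the number of pixels but
# the chance of a spurious limb doesn't fall with scale, more resolution
# buys a prettier wrong cat, not a righter one. Numbers are invented.

def render_cost(scale, base_cost=1.0):
    """Cost grows linearly with pixel count, i.e. scale^2 of a base image."""
    return base_cost * scale ** 2

ERROR_RATE = 0.15  # assumed constant per-image chance of an extra leg

for scale in (1, 4, 16, 64):
    print(f"{scale:>2}x: cost={render_cost(scale):7.1f}, P(extra leg)={ERROR_RATE:.2f}")
```

The point of the sketch is the last column: cost climbs four orders of magnitude across the rows while the error column never moves, which is the "nothing in the model demonstrates a 64x cat is any less likely to pop an extra leg" claim in numeric form.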
> It's visual and obvious, dude.

I'd go with 'trivial' or 'left as an exercise for the reader'. You're right, though. Generators seem less able to remove 'turbulence' from the output than to move it someplace else within it and hope for the best. Like, I tried to make some character art for my game, and it can pull off some handsome faces, for sure more detailed than I'd have patience to draw, but the clavicle-to-armpit areas look inexplicably like Munch's melted cheese period.
> Like, I tried to make some character art for my game, and it can pull off some handsome faces, for sure more detailed than I'd have patience to draw

And I think this is key. It's stupid to argue these problems won't be fixed. Give it a year and it'll pull off handsome faces without string-cheese anatomy. But who's using that? You're using it for atmosphere and ambience around something where you would have simply done without. You weren't about to pay a human to draw those characters.

This is very much like my own use of AI - "Hey Midjourney, give me a picture of 'Fear and Loathing in Enumclaw' to share with five friends." One of those friends tried to get Microsoft Copilot to give him a logo for his studio; they were all awful. Three or four of us pointed out that he could get up on Fiverr and do infinitely better.

Is that the argument, ultimately? That AI will do a better job than Fiverr? ...cuz... it's more expensive than Fiverr. It should. And also, everyone on Fiverr is going to be hella better at using AI to get you what you want than you are. The tools are always going to have shortcomings; all tools do. Professionals learn how to work around those shortcomings to do a better job faster.

To me? Much of this discussion is "ZOMG nail guns are going to put framing carpenters out of business."
I'm not arguing those problems won't go away, or that it's any more or less than a tool. You can give me that much, I hope. And you're right that I wouldn't pay a human for those, at least unless they were recurring NPCs or something like that. I do commission background sets regularly because 1) the free/cheap/generated ones are usually on par with what I can make, and 2) what I can make suffers a severe pizzazz deficiency. Lotsa bang for a buck, too.
Yeah the best advice in nearly any endeavor is "hire the best expert you can afford and do what they tell you" and if you are paying artists for a campaign that is fuckin' awesome. No shade intended. The business model of all these AI companies, on the other hand, is "get people who would never pay experts to pay us because they don't believe in expertise."
If it's like a nail gun, then it's still problematic, because almost every new piece of tech disproportionately benefits the capitalists. Maybe the number of framing carpenters stays the same, but they're upping output, building houses quicker, and a proportionate rise in wage is doubtful, or at least atypical. The builders and real estate investors profit even more, hurrah!

Even if this all never becomes a Thing, I think it'd be cool to have a university or public-funded LLM unleashed on everything public domain and voluntarily (lawfully) donated libraries and content. Do you think it'd be worth it?

lol now I'm imagining a Trump admin. procedure for "expertise codebase corrections", governing what is allowed to be input when, like, the executive branch LLM is allowed to assimilate feedback from expert-level critique.

PURPOSE OF CODEBASE CORRECTIONS
-- TO ASSIST PROGRAM WITH TOP-LEVEL ASSESSMENTS OF HURRICANE SCIENCE AND PREDICTIONS

APPLICATION
-- EMERGENCY ALERT INSTRUCTION
-- FORECASTING
-- EDUCATION
-- GLOBAL WARMING INTEGRATION

PARTICIPANTS
-- DR. WILLIAMSON, UNIV. FL
-- DR. NAKITOSHA, TOKYO UNIV.
-- BILL ACKMAN, BILL ACKMAN
-- SAM ALTMAN, MULTI-TRILLIONAIRE
-- DONALD TRUMP, LORD
-- VIRTUAL SHARPIE, DONALD TRUMP
-- NUCLEAR BOMB, U.S./DONALD TRUMP

k back to reality. If something good gets put on iPhone, that could be the push towards mass adoption that'd matter. People would get used to using it. Apple's, of course, already way in deep with it financially, too, but hasn't deployed much of anything yet.

Self-driving cars? Too hard of a problem to solve, especially without privatizing infrastructure. NFTs? Right-click "save as" for the digital, seek state-enforceable means of ownership for the physical. Crypto? I have a debit card. This? The only issue I can see is what you've already NAILED, mr. framing carpenter: the legal field. But I still think big parts of this stuff are going to make it into our lives. (Already has, to a degree. The TikTok algo is probably the most successful implementation so far, financially.)

Obviously I don't mean only Sora or images, but stuff like accruing or building any type of content hyper-shaped to your tastes, learning a new language, making it code for you, or, apparently, for some people, falling in love with an algorithm and feeling devastated when you're locked out of your profile or your hard drive crashes.

Oof, hey, if you wanna watch it fail, you could try to have it teach you how to play an instrument. That would be content. "LLM, please write a story about a man who asked an LLM to teach him how to play an instrument, but was met with extreme failure."

This was pretty good, even without the twist, but my wife called it about halfway into the thing: "They probably made ChatGPT write the ChatGPT episode". Yup. They did.

I really do think people will use this on a massive scale, and pretty quickly. Some jobs will be lost, and some jobs will be created. Not terribly sure how much of each.
> If it's like a nail gun, then it's still problematic, because almost every new piece of tech disproportionately benefits the capitalists.

and there it is. Fundamentally, everyone in a capitalist society is a capitalist, either voluntarily or involuntarily. I agree fully - tools can definitely be used to the advantage of one social class over another. We have no newspapers, for example (middle class) because of the annihilation of classified ads (lower class). Farming is concentrated (upper class) because of the mechanization of individual agriculture (lower class). But going "this tool is the problem" is an utter and total waste of time if what you're trying to do is protect society.

Are LLMs plagiarism machines? Mos def. Are they useful without plagiarism? Prolly not. Do we have mechanisms in place to protect against plagiarism? Hell yeah - all that has to happen is for the techbros to learn they're not above the law. Yet when I say "it's all plagiarism" what I get, EVERYWHERE, is "no no man it's fuckin eldritch magic that will doom us all."
We'll tell our grandchildren "we used to make our own handsome faces". I'm usually not on the side of techbros, but I do think LLMs and image/video stuff is some of the most disruptive technology to come along in about a decade, maybe more. But to be fair, I dunno why deepfakes haven't been more impactful; it's kind of a similar vein.

Maybe the most exciting thing is the possibility that this will eventually destroy the internet by feeding its outputs back into its inputs until the web fractalizes into nesting outrage bubbles interspersed with fake cute animal .gifs.

Since I'm self-righteous, I'd like to think one of the last things it'll come for is physics and math. Like being able to publish something novel. I think an LLM's best chance would be going the experimental route, sifting through public-domain data and finding something the existing literature had missed. It might have the hardest time doing some of the hand-wavey stuff theorists do to get analytic results, when you need a deeeeeep understanding of exactly what the maths represent, or the motivation for using a certain approach or approximation, etc.

Anyway, I hope you are well. :)
> I'm usually not on the side of techbros, but I do think LLMs and image/video stuff is some of the most disruptive technology to come along in about a decade, maybe more.

True or false: image creation is an area in which you have practice and expertise. See, you're going "everyone is an idiot but me." Stop that.

> But to be fair, I dunno why deepfakes haven't been more impactful; it's kind of a similar vein.

It's because if you want the fake to work, it has to be carefully crafted to not stretch credulity. "Huh, look at all the Taylor Swift nudes! I wonder if any of them are real!" - no one

> Maybe the most exciting thing is the possibility that this will eventually destroy the internet by feeding its outputs back into its inputs until the web fractalizes into nesting outrage bubbles interspersed with fake cute animal .gifs.

Here's my gremlin opinion: Microsoft funds OpenAI because they KNOW it's poisoning Google. Example: we've been watching Hotel Hell with dinner. One of the games we play is "what happened after Gordon left." This involves a web search - and it's a perfect web search for AI. It's content nobody really cares about, driven by a large mass-media exposure with a long tail (the episodes aired in 2012). Now - check this out. That's an AI-generated website. It's also the top hit for something on Hotel Hell. If you dig into any of the blogs dedicated to "where are they now" reality TV updates, you learn the place closed in 2020. If you look on Trip Advisor, you see that the last review was in 2020. But if you look on Facebook, Yelp, Kayak or anywhere else, there's a link farmer with a phone number and an email address who totally doesn't have a hotel but will absolutely take your credit card number! Bing's results aren't much better, but then, Microsoft doesn't make their money from search and never will, so fuck search.

> Since I'm self-righteous, I'd like to think one of the last things it'll come for is physics and math.

LLMs have no deep understanding, so they'll never come for anything that requires deep understanding. Shit, LLMs have no understanding. How many legs does an ant have? How many pawns on a chess board? These are the constraints that hobble an LLM; they don't make them better, so they're never going to grok that shit. If you need something that knows how many fingers hands should have, you need something other than an LLM.
Kind of. Learned Photoshop in high school, messed with Illustrator recently. I script command line image manipulation (ImageMagick) and video (ffmpeg). I'm only artsy enough to upset my Christian mother sometimes. I think you're asking about that, specifically, and no, I'm not the best painter, sculptor, drawer, logo-designer, or whatever. Sora's doing wayyyyyy better than me.

Kinda, but I'm an idiot too. Just hopefully not about this. You've posted stuff yourself that shows how quick so many people are to be fooled by some AI images. People are busy. They're in a hurry.

Oh this is absolutely true. My wife and I have literally done this for years. And Kitchen Nightmares, too. The website scam is pretty solid; there's gonna be a lot of that. It's already illegal, I'm sure, and companies should get in big trouble if their LLM is an accessory to fraud. The litigation surrounding stuff that's in a more morally gray area will be thrilling, I'm sure. One way or another. I understand.

Ha, no, but it's kinda the ol' "magic is science we don't understand yet" thing. If it's passing the Turing test, it will feel intelligent. Indistinguishable, most of the time. It's easier and more productive to talk to online than at least half of America. And you can just photoshop out the extra digits and save yourself potentially hours upon hours of time without having to synthesize too much, my dude.

Again, yeah, legal stuff's gotta get sorted, but this tech is mos def my bet for most disruptive in this generation. Like a 15-year span. It'll be: cell phones -> internet -> social media -> LLMs. Wish I could tell you what I thought was next. Would if I could.

True or false: image creation is an area in which you have practice and expertise.
See, you're going "everyone is an idiot but me."
Microsoft funds OpenAI because they KNOW it's poisoning Google.
We've been watching Hotel Hell with dinner. One of the games we play is "what happened after Gordon left."
LLMs have no understanding.
FUN FACT: The Turing test was about "can you tell if I'm a woman," not "can you tell if I'm a robot." It's like that goddamn Potter Stewart quote - when you throw it in my face, it reveals that you've found a platitude to model your understanding on, not a theory.
Cyrodiil's Jesus! No, I tried generating something a touch less 4chan-does-Amnesia and more Balkan Romani without the perpetually disappointed look.

Theory is much less about hand-waving connections between deeply understood parts and more about doing the math with as few preconceived ideas as possible. Don't imagine what the atom/potential/sun is; calculate and interpret what comes out, see if anyone tested something similar / calculated it in a similar regime. Propose an experiment, try to make a feedback loop with someone (or something) that'd bounce ideas back. It's everything else that ought to be automated, 'cause the amount of paperwork they try (underline: try) to pile on me is just fucking ludicrous. The problem is that models aren't better at determining they're wrong than humans, and are unlikely to learn it, since their very nature is numerical bias. And, frankly, LLMs/models/AI/whatever should have less of a problem replacing philosophy, because doing proper math requires pencils, paper and a wastepaper basket for wrong ideas... whereas philosophers seem to only ever need the first two.

Otherwise, I kinda stopped paying attention to anything that isn't directly related to my interests, tbh. Seems like everyone is losing their shit over anything and everything in the news/work/word holes, while I'm tackling the deeper mysteries of: is it better to keep seeing someone with a 3-year-old and see where it leads, or cut it loose before things get difficult for the kid more so than us.

Same to you. We gotta do some meetup. I wanted to organize one in January, but my health took a dip; maybe it's time to try again.

We'll tell our grandchildren "we used to make our own handsome faces".
Since I'm self-righteous, I'd like to think one of the last things it'll come for is physics and math.
Anyway, I hope you are well. :)
Agree, the complexity put into making sure conclusions are correct-ish is going to be hard to replicate. Philosophy deserves every burn. Sorry. But only a little.

We use machine learning pretty commonly now in my field. It's been harder for some of the older folks to grasp exactly how it works. But yeah, an algo isn't going to drive it right or understand the shortcomings. Not sure why you'd want a middleman, either.

I'm not losing my shit, no worries. Well, kinda. I'm always at least kinda losing my shit, though. And hey, kids are... a lot... but I will say, men of much less resourcefulness than yourself have found fulfillment in adopting a kid. I struggle with patience, personally.

I'll try to make the meetup, but my schedule really clears up in mid-April.
Eh, I'm being my usual exaggerated dismissive, but it's sad that the two most visible-to-me camps are essentially "it's only so unbiasedly rational of us to consider how many AGI could dance on the needle's head" and "mathless/IFLS quantum vibes" types. It's not even that I don't see the merits of those two, let alone philosophy at large, but that I have absolutely no fucking interest in either, yet they keep talking at me like I'm a lobotomite for not caring. And no, wasoxygen, I'm not calling you out specifically, it's just how you Yudkowsky-ites communicate. We're cool, I hope.

Well, ML/whatever excels at finding patterns, even if it can't/won't explain them. Having a tool that goes "exploring these parameter spaces is most likely worthless" or even "isn't it funny how second-order solitons only form when this parameter is divisible by 17?" may be invaluable to the right person who can find context for those observations. That's the "(or something)" in my previous comment. Tying this to "making sure conclusions are correct-ish is going to be hard to replicate" <- that's the bottleneck as far as I can see. First you have to separate seeds from chaff, and then make sure those seeds aren't blighty or cleverly disguised angry bears. I wouldn't mind science becoming (even more) akin to computer-assisted chess, though. Tools are tools, experts use tools better, so that checks out too.

Wasn't singling you out here, though I hope you take care of yourself and wife. And it's not like I don't understand, or lack the presence of mind to understand, why people are so agitated. I simply can't keep dealing with it. It's been two goddamned years, and I can't even force myself to go to Ukraine anymore. I haven't seen the worst, and it's too much. Focusing on what I can affect has to be enough for me right now.

As to meetups: no worries, I can make another one in April or May. They're about as informal as flip-flops anyway.

Philosophy deserves every burn. Sorry. But only a little.
Not sure why you'd want a middleman, either.
Losing shits and meetups
Since I'm like public journaling now instead of just allowing thoughts to pass through my head without any reinforcement and then showing up to hubski like "oh, I don't have anything", I'll give an example: "If I tried to LLM at work."

There's a global model of the magnetosphere and surrounding solar wind environment that I run through a public website. I query the model for a certain day or time that I want (step 1). Wait a few days, then I look through the results and do the science (step 2).

For step 1, there is no benefit in having a program input the date and time, with a few choices that I make for which sub-components of the magnetosphere model I want to use, because it takes about five minutes.

For step 2, the way that I look through the data requires an entire methodology in which I'm using outputs from the model to re-input back into the next time-step for visualization. I'm tracing magnetic field lines through time/space and the magnetosphere as it convects (I've automated it using a python webcrawler and maths to produce a movie). The idea that I could simply ask an LLM to do this is pretty funny. It's so specialized that I can guarantee it would fail immensely to know wtf I meant when I said "take the results from this model run and show me a movie of magnetospheric convection. I want bundles of magnetic field lines that pass through the reconnection site near satellite XYZ emphasized". I think the amount of additional information I would need to feed it for the thing to even come close is infinite, because it's probably never going to give me something good. More on that below.

But let's say that it does. It's the game of "how do I know it's right?" again. I've gotta inspect all of the code that it wrote to do it, and I can guarantee that it's gonna be an implementation that's a way different structure than mine. I'm going to put in so much effort checking it that I'm not going to save an iota of time. OK, so I have my video, one way or the other.
I can now look through it and do the actual science, linking it into an analysis of data from that satellite. There is simply no fucking way that any LLM or AGI on the foreseeable horizon could do this. Doing the science means comparing the new m'sphere model outputs to the existing data analysis, linking new interesting/publishable physics of the two, discussing how this is different or similar to previous studies, and thinking about how the results can be applied towards the next step. It requires a deep understanding of how this contributes to the field. This is at least approaching ASI territory.

Furthermore, for the science, the LLM or whatever it is has no interest in images. It cares only about model outputs. It would actually have to perform the conjugate of what I have to, and take the images from previous movies of magnetosphere convection and put them into a form for comparison with the magnetosphere model output data. The whatever-it-is will have to know how to transform the data into formats suitable for comparison, and then it'll have to have correctly ingested the publishing record to form a pseudo-understanding of everything. Can't imagine the lengths it would have to go to output something like "we can see that if the only difference is a Y-component reversal of the upstream magnetic field in the solar wind, the reconnection site moves southward towards the spacecraft, because the X-line is shifting to accommodate cusp reconnection relocating from the cusps on the dawn/north and dusk/south quadrants to the dawn/south and dusk/north quadrants, respectively".

Would the Whatever know that it'd be good to run the magnetosphere model I used for the period of time used in the previous study, which used a completely separate m'sphere model, to factor in the differences between the two models that might explain the behavior instead? Does it know that it's important to comment on the distance from the satellite to the reconnection site?
Is the data analysis conclusion that the satellite is at a reconnection site actually wrong? Are there shortcomings in the m'sphere model that help explain why the m'sphere model's reconnection site differs from where we actually found it? It's obviously not advisable to expect this inside of two or several decades.

Maybe it could build me a movie, but I doubt it. Unless I am guaranteed that a running instance of my efforts to coach it is preserved and always available should I achieve a successful/correct movie once, or that any new pseudo-understanding I had to lead it to is properly assimilated into the root system, there's no reason to even begin trying. Correct me if I'm wrong, but that's not something publicly available yet, and I can see massive hurdles to it ever happening. lol, what am I gonna say? "That's right! You finally did it. Now, don't forget how to do this the next time I ask, I don't want to have to spend another seven months filling in the gaps in your understanding of this again"? Hahahha

"Filling in gaps of understanding" deserves a dissection, because it's more general - not just for physics or science, but for anything. The process looks like hell. Because, like we've said, the LLM doesn't know what's "correct", so it's not going to ask you any substantive questions. It's going to output what it outputs, and you'll have to look at the outputs and tell it why it's wrong. Iteratively. Having it fix one thing could break another. It could even infinitely diverge instead of ever converging on the solution you want it to. This all assumes that you know what you're looking for - what "right" means. And then, even if it does get things right, yeah, unless you work at the company that owns the LLM, it's all forgotten when you close the instance. Job security. Job security for all!
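For what it's worth, the mechanical part of that pipeline - stepping along the local field direction to trace a line through the model output - is the easy bit to write down. A minimal sketch (assuming a hypothetical `b_interp` callable over gridded model output; the real pipeline, time-stepping, and webcrawler are not shown):

```python
import numpy as np

def trace_field_line(b_interp, seed, step=0.1, n_steps=2000):
    """Trace a magnetic field line from a seed point by RK4-stepping
    along the unit field direction.

    b_interp: callable (x, y, z) -> B vector, e.g. an interpolator
    over one time-step of gridded magnetosphere-model output
    (hypothetical stand-in, not a real model interface).
    """
    def unit_b(p):
        b = np.asarray(b_interp(*p), dtype=float)
        n = np.linalg.norm(b)
        return b / n if n > 0 else np.zeros(3)

    pts = [np.asarray(seed, dtype=float)]
    for _ in range(n_steps):
        p = pts[-1]
        k1 = unit_b(p)
        k2 = unit_b(p + 0.5 * step * k1)
        k3 = unit_b(p + 0.5 * step * k2)
        k4 = unit_b(p + step * k3)
        pts.append(p + (step / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4))
    return np.array(pts)
```

The point stands, though: this is the part that takes an afternoon. Choosing the seed bundles, re-tracing per time-step as the magnetosphere convects, and deciding what the resulting movie *means* is the part no LLM touches.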
I had a discussion with an old buddy about LLMs yesterday. He's writing fiction and is using ChatGPT like a rented mule. He's got a character who's modeled on Andrew Tate, but he wants him to be annoying, not a villain, so he'll type "give me ten things a sexist asshole would say about women that aren't awful." He's got a character who's a vampire, so he'll type "give me a list of insults a vampire would use against townsfolk." Or he'll be analyzing plot points and he'll say "give me a list of movie scenes that would radically change the movie if they were absent." In each one he goes through and picks what he likes. In the last one he argues with it.

I pointed out that he's basically using ChatGPT like an extended thesaurus and he agreed. I also pointed out that if you ask an LLM "give me the stochastic mean of this vector through a set of points" you are using the LLM as it was intended to be used - it will give you the mediocrity every time and, because it's basically a hyperadvanced Magic 8 Ball, every now and then it will be brilliant. But - I pointed out - when you ask it for an opinion it will fall down every time because it has absolutely no handles on any of its inputs and outputs. You can't ask it to tell you what scenes are crucial because it has no understanding of any of the concepts underneath. What it has is a diet of forum posts that it will never give you straight.

Shall we play "how can ChatGPT do my job?" 'cuz they've been trying to AI-automate my job forever. See this guy? They were about $1500 back in '94. And what they do is analyze the audio signal passing through them looking for feedback, and then they drop one of eight filters on it. You can adjust the sensitivity to feedback, you can adjust the latch, you can adjust the release, you can adjust the aggressiveness.
They were really big until about 2005 or so, when it became cheap and easy to TEF-sweep a room and ring it out to EQ out the frequencies that cause things to ring - I'm sitting here surrounded by ten speakers at 85dB, and having spent an afternoon mapping and collating and inserting between 4 and 15 filters per channel, I can't get feedback if I hold a condenser in front of left main. Could an AI have done that? Fuck yeah. That would have been delightful. But not without me moving the mic sixty times, so what time am I actually saving? That active-seeking feedback reduction thing has made it into machine tools - each servopak on my mill has more filters than that Sabine. And in general, the approach everyone takes is "set as many as you need to kill steady-state, use the roaming ones carefully" because who knows what modes you'll run into with this or that chunk of aluminum strapped down getting chewed up.

Everything I've got is already a waveform. We've been using Fourier transforms to operate on them for 40 years. My life is nothing but math. And despite the fact that GraceNote has literally released every song they know about as training data, telling the AI "make my mix sound better" still fucking failwhales. Like, on a basic, simple level. It understands what the sonogram of a song should look like, but that's like reconstructing a fetus from an ultrasound. What you get is uncanny valley nightmare fuel. I don't need the mediocre middle of a million mixes, I need excellence. And excellence comes from humans because it is, by definition, not the mean. Anyone expecting that a machine purpose-built to give you a statistical average can give you only the good outliers is going to be disappointed, for the simple fact that the machine doesn't understand "good" or "bad" - it understands "highly rated" or "much engaged with." The machine thinks this is the best Jurassic Park cover ever made: And the only way you can deal with that is to nerf it out on a case-by-case basis.
You could argue that LLMs are good for facts but not opinions, but the problem is that their method for handling facts only works for opinions. Are they useful? Yes. Are they a tool that will make big changes to a few industries? I don't see how they can't. Am I honestly excited to see their actual utility? You damn betcha. But where the world is now is this: people who don't understand AI inflicting it on people who don't need AI, to the detriment of people who don't want AI. That's it. That's the game.
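The Sabine-style trick described above - watch the signal for feedback, then drop one of a handful of filters on it - boils down to a simple observation: feedback rings as a sustained narrow tone, while program material moves around. A rough sketch of the detection half (thresholds, frame handling, and the persistence test are made-up illustration, not how any real unit works):

```python
import numpy as np

def find_feedback_candidates(frames, rate, thresh=0.6, max_notches=8):
    """Flag likely feedback frequencies: FFT bins that stay near the
    top of the spectrum across *every* consecutive frame. A squeal
    persists; music doesn't. Returns at most max_notches frequencies
    (Hz), mirroring a unit with eight available filters.
    """
    n = len(frames[0])
    window = np.hanning(n)             # taper to limit spectral leakage
    freqs = np.fft.rfftfreq(n, d=1.0 / rate)
    persistent = None
    for frame in frames:
        mag = np.abs(np.fft.rfft(frame * window))
        hot = set(np.nonzero(mag > thresh * mag.max())[0])
        # keep only bins that were hot in every frame so far
        persistent = hot if persistent is None else persistent & hot
    return sorted(freqs[i] for i in (persistent or set()))[:max_notches]
```

Each flagged frequency would then get a notch filter dropped on it; the sensitivity/latch/release knobs on the hardware correspond to the threshold, the persistence window, and when a roaming filter lets go.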
Ahh, of course, the feedback thing. I don't do anything live, so I can just get away with a pretty simple gate and headphones. No chance of loops. Hadn't really thought about how I would suppress feedback loops without killing the channel or at least lowering the volume. But now I completely get it. I got really close to connecting the dots a long time ago when I suggested basically TEF in a convo with you a few years back. My mistake was thinking about mixing. I was thinking about minimizing phase cancellations as a function of frequencies. But duh:

My co-worker would bolt a plasma spectrometer with accelerometers on it to a vibration table, with some special isolators between the instrument and mounting baseplate, and we'd shake them with a sine sweep survey starting from like 1 Hz up through, I dunno, 40 kHz or something like that, and a power spectrogram level was input to govern the amplitude around each frequency. JUST like what you're doing with mics? We do it too. We'd already calculated the approximate normal modes of the instrument from 3D CAD models (we used Ansys), and so we notched the input frequency spectral energy around the normal modes so we don't overdrive the thing during vibe testing. And then we shake it with the launch environment, a white-noise spectrum, still modestly notched around the normal mode frequencies (which might have needed slight readjustments from the sine sweep results).

By the way, at GSFC, they have like a 10-foot-diameter gramophone to just blast shit with. I'd guess it was for Saturn V's, hahah, but I don't know! Didn't get the story. (edit: ohhhhh, I think it might've been for cleaning, especially considering that it was being kept in one of the anterooms bordering a clean room. They must be using the thing to knock any loose particles off of equipment or instruments with sound. We did the same thing with an ultrasonic bath after de-greasing parts with trichlor, before the final isopropyl wipe-down.
They'd soundblast it after that. Probably a pretty clean room.) Which has its uses, heh, though perhaps mostly uncommercializable.

Absolutely agree. The LLM is navigating topological features inside a parameter space. With boundaries, and curvature, yeah. It's what I'm doing for the magnetosphere, actually. Same kind of idea. Except with, I dunno, maybe a billion axes instead of the four I use. But yeah, sometimes if you move just a little bit in the parameter space from where you started last time, or you start off in a slightly different direction, the topology might map to some drastically different places. Occasionally they will conjoin into beauty. AISI: artificial idiot savant intelligence.

Hadn't heard any AI tunes yet, and figured there was good reason for it. I don't go looking for them, and a really good one would have found its way to me by now if it existed.

We don't, agreed. I only want it for selfish reasons. And I only want it if I can feel assured it isn't going to cripple society. So I don't want it. Nvm. Feels like we're all getting a better handle on the level of complexity to expect, though. It'll change. Hopefully not too fast - this has apparently been jarring enough for the world already - but AGI in two years? I just don't think so, and I'm 100% sure that ASI isn't only three years out.

What you get is uncanny valley nightmare fuel
I also pointed out that if you ask an LLM "give me the stochastic mean of this vector through a set of points" you are using the LLM as it was intended to be used - it will give you the mediocrity every time and, because it's basically a hyperadvanced Magic 8 Ball every now and then it will be brilliant.
...people who don't need AI...
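The notched-sweep idea described above - pre-compute the instrument's normal modes, then cut the drive level around them so resonance doesn't overdrive the article during the sweep - reduces to building an amplitude-vs-frequency profile. A toy sketch (depth and fractional width are illustrative numbers, not anything resembling a real test spec):

```python
import numpy as np

def notched_sweep_profile(freqs, modes, base_level=1.0, depth=0.25, width=0.1):
    """Drive level vs. frequency for a sine-sweep survey, reduced
    ("notched") around predicted normal-mode frequencies.

    freqs: sweep frequencies (Hz); modes: predicted mode frequencies
    (Hz, e.g. from an FEA model); width is the fractional bandwidth
    of each notch.
    """
    f = np.asarray(freqs, dtype=float)
    level = np.full_like(f, base_level)
    for f0 in modes:
        # cut the drive inside +/- (width * f0) of each predicted mode
        level[np.abs(f - f0) < width * f0] = base_level * depth
    return level
```

The same profile idea applies to the white-noise launch-environment run, with the notch centers nudged to wherever the sine sweep actually found the modes.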
that sounds so fucking awesome

Well what you're doing is ringing out the frequency response, right? You're trying to find constructive modes that are going to fuck you over while strapped in a rocket. You do that with an equalizer if it's sound, or filters if it's an electromechanical system. I've linked this before, the eldritch magic starts at 3:35: For the record, the last time I used ANSYS it was a command-line program that ran on a DEC Alpha.

that sounds so fucking awesome

You are grossly underestimating the ease with which bad mixes can be produced. The computer music cats have been doing "generative music" for a long time. It's easy as shit and doesn't require an LLM. Most of them are some form of neural network somewhere; "random ambient generator" has been an off-the-shelf product category for 20 years. Here's a free plugin for Kontakt. Here's a walk-through for Ableton.

My co-worker would bolt a plasma spectrometer with accelerometers on it to a vibration table with some special isolators between the instrument and mounting baseplate,
and we'd shake them with a sine sweep survey starting from like 1 Hz up through, I dunno, 40 kHz or something like that, and a power spectrogram level was input to govern the amplitude around each frequency. JUST like what you're doing with mics? We do it too.
We'd already calculated the approximate normal modes of the instrument from 3D CAD models (we used Ansys)
By the way, at GSFC, they have like a 10 foot diameter gramophone to just blast shit with.
Which has its uses, heh, though perhaps mostly uncommercializable.
Hadn't heard any AI tunes yet, and figured there was good reason for it. I don't go looking for them, and a really good one would have found its way to me by now if it existed.
Absolutely. The normal modes. As it goes, first is the worst, second is the best, third is the one with the treasure chest. Sometimes it's "hairy chest", depends on the elementary school.

When people use generative stuff in music well, it's noted. One of the most ridiculous arpeggio parts ever was made with Omnisphere's arpeggiator and then meticulously adapted for guitar. Probably took a little bit of practice (the rest of my life, in my case).

Well what you're doing is ringing out the frequency response, right?
Dunno, probably not, but I think you could instantiate one that can when they can, and freeze its learned ability, so the whole hoping-it-doesn't-forget might go away. But I have no idea. Don't write that much code or work with raw data these days, so bibliographic aid is just about all it can do for me in an hour of need. Otherwise, it's about as tangential to my goings-on as it can get.

When I tried that 'explain paper' site, it left enough of a distaste for me to roll my eyes and move past. Between absolutely fucking insisting that some unrelated mathematical concept[0] is absolutely crucial to explain my question, and rephrasing a circular argument until I got bored and left, I probably won't bother again for quite a while. Unfortunately, the above experience means I'm unlikely to trust LLMs with stuff I don't know a lot about.

Also, I kinda regret writing anything in this thread and will probably just add more tags to my ignored list. Fun company notwithstanding - too much hassle, too few fucks left.

[0] - I wrote and deleted a 900-word footnote of jargon about orbits of the coadjoint representation groups and operators in de Sitter space, so let's pretend I said Tits index and wiggled my eyebrows in an amusing way.

Correct me if I'm wrong
That is the only way to fly, in my opinion, and we haven't discussed this much (edit: well nah, we kinda have), but people aren't going to use it like that, obviously. Don't blame you for any filterings. I kinda like livening up this place. It's LLM season on hubski, baby.

But one last quick story! I'm a couple miles from home standing in line to order a burger (probably in flip-flops again) and a guy gets in the to-go line. Says "Order for so-and-so", and the cashier checks the order tickets. Nothin'. He says "I called such-and-such number". She refers to some post-its behind her, and sees that it's the other branch across town that he called and ordered from. He then says "watch", pulls up his phone, and goes "Siri, call Restaurant X on Street Y" (where we are), and it was replicable - it dialed the other branch again. He goes "so it's not my fault. I should get some food for free, I already paid". And I think he did. And he cut everyone in line. I wasn't in a hurry, it was nice to have front row seats for such a prescient demonstration. It's gonna be a fun time.

... I'm unlikely to trust LLMs with stuff I don't know a lot about.
When every foodhole in Warsaw connected with a delivery service overnight, outgoing orders had much, much higher priority. So, during the pandemic, you had a crowd of deliverers, a normal line that moved at a snail's pace, and a nearby crowd of people who placed their orders in an app to game the system. This led to a situation where people from the last group placed orders to <restaurant's address> and added comments like "I'm the one wearing a brown hat with a gigantic pompom" or "I'm already behind you." Insert something about the follies of idiots with access to technology. I don't know, I barely slept since Friday.

I wasn't in a hurry, it was nice to have front row seats for such a prescient demonstration.
Yeah. Gonna be a lot of LLM Florida Man stories. Same. But I do like checking back in here when I hit a roadblock at work. It's synergistic. Good luck with your coming week. Mine's gonna be crunch time, but I think I'm almost ready. Peaceeeee

barely slept since Friday
I could honestly benefit from an involved re-visiting of philosophy, but it doesn't really feel terribly necessary, all things considered, at the moment. This is my field's IFLS. Except not, because it's just flat-out wrong, as opposed to a flavorful interpretation of quantum mechanics.

No worries, fam is good. I'll periodically re-enter an "oh shit, it's fascism!" check-in phase, but I try to keep it Stoic more of the time, these days. Like, I'm not chanting the serenity prayer, just wishing for the same thing more often than I used to.

OH, that reminds me: for your consideration, I'd like to submit the most American thing ever done, possibly. I wore my flip-flops to the McDonald's in downtown Bern, Switzerland, while unknowingly incubating covid that I'd gotten on the plane ride. A homeless woman outside goes "PLUGH, l'Americaine!" ("the American!"), and all I could do was think to myself "I know, right?".

They're about as informal as flip-flops anyway.