Jaw-dropping to be honest. Be sure to click through the example slideshows.
Useful...? we'll see.
I'm trying to find the upside of this. Unless I'm a capitalist looking to lay-off a huge number of workers, fuck this shit. So yeah, fuck this shit.
Now you see why Altman has been working hard at UBI. I don't think he'll solve that, but he knows we are fast approaching a world where humans won't be worth much on the job market. People are starting to understand. TBH if you don't like capitalism, you should like this.
Don't go all accelerationist on us! Seriously though, this is but the latest salvo of "things will get worse before they get better". If enough things get worse all at once, there really isn't a guarantee that they'll get better for a loooooong time.
That presumes that the rough approximation of expertise produced by AI is enough to get the job done, despite everyone's millennia-long experience that the whole world has a rough approximation of expertise, and the way you hold a job is by going the rest of the way. The storyboard canard is the perfect encapsulation of this: a neophyte with no ability to make filmed entertainment can now pay $20 a month to enjoy the accoutrements of making filmed entertainment without being in any danger of actually accomplishing anything, while the people selling her on this idea are giving someone $1100 a day to draft their ideas.

And I say this as an apex predator in a field that has already experienced an "AI-like" mass extinction event: there are far fewer professional mixers now than there were ten years ago, not because AI can do it, but because the massive proliferation of untalented executives who don't understand post-production made everyone read their television. If you don't need it to actually sound good, you've been able to do it at your house since shortly after Nirvana's "Nevermind" came out. If you need someone to pay for it, I'm right here with $30k worth of Pro Tools.

It will come down to whether people want their efforts to be successful or not. If they couldn't afford for them to be successful before, AI is not going to change that. And if they could afford for them to be successful, they'll keep paying humans, because nobody wants a surprise fifth leg on their cat.
I think it depends. If it doesn’t matter to the overall project to have it perfect, then as long as AI is good enough it will be used more often than not. That’s why I’m laughing at writers who are all in on “save the cat,” forcing every story ever done to fit a single structure. AI can do stuff like that super easily. While it probably can’t make a great arty movie, it can absolutely churn out formulaic crap easily and cheaply. And as long as people choose formulaic crap over arthouse cinema (which they reliably do), AI will take over most film jobs, and studios will make do with whatever minor inconsistencies and inconveniences AI introduces to big blockbusters, because it’s not like anyone goes to a Marvel film to gawk at cinematography. As long as your film franchise is McDonald’s levels of formulaic, and that’s what your fans expect, there’s no reason to waste money on expensive humans.
The whole argument for Hollywood is it could be used to substitute for other locations. The only reason it was there was to get away from Edison's patent protections. And I'm sorry to be the one to break this to you but digital green screens have been used since Phantom Menace and if anybody wanted to do shit in the studio rather than on location there have been well-refined and well-understood production pipelines for it since Talkies.
I feel like we got different reads out of the tweet. I agree with everything you've been saying re: AI 100%, wasn't trying to be combative at all. My take on the tweet was that it's going to be used by the know-nothing producers who hate experts that you were talking about earlier, and that will make for worse art. When it's a tool used by experts it's lovely. I have nothing against green screens, studios, set extensions, CG, or any of that.
I got a different read out of it because it's my industry. The only reason Tyler Perry is in Atlanta is because of the massive incentives Georgia passed on filming and they're rolling them back. But if you want to draw attention to the problem, you scream "AI" not "muh subsidies". Because no one NO ONE can keep a clear fucking head about ChatGPT.
Oh, yeah. That makes sense about why the studio was actually cancelled. I liked the quote part though. It's my personal biggest worry with the tool: that the people with money will decide they don't like dealing with experts and their expert opinions and will use it to make and fund worse things instead. Not even necessarily cheaper, as you said earlier, 'cause the stuff's expensive and will be even more expensive once they actually have to pay copyright owners.
Sure you like it - it's telling you what you want to hear.

Look - I didn't know Georgia was rolling back their film incentives until I decided to make the argument. Why? Because it's so obviously that. Tyler Perry is an incredibly talented creative who gets short shrift from wypepo because he doesn't make content for them, but he is every bit as sharp as anyone in the entertainment industry, probably hella sharper. And I don't know who "BirdRespecter" is but I know he couldn't give a fuck about Tyler Perry or film incentives - he's got an axe to grind over whatever the Internet has an axe to grind this week.

Look. This shit makes me so tired. I should just stop. I should just fucking walk away. But it's been SIXTEEN YEARS since Steven Spielberg was celebrated for doing Crystal Skull without CGI and then just fuckin' OPEN ON: digigopher

I mean this in the nicest possible fucking way. Y'all are fucking idiots. Y'all are gawping morons, rubes, dipshits, dilettantes, tryhards, fuckwits. YOU DON'T KNOW ENOUGH TO HAVE THE ARGUMENT. Nobody - NO ONE - who has ever participated in a VFX pipeline is in these discussions. They're too busy being shouted at by the terminally online. I gave up weeks ago - there are way too many people with way too little knowledge and way too much opinion to allow anything as inconvenient as expertise to protrude into their platonic ideal of a debate. I myself have said "I have faced this" "this is directly what happened to me" "this has been my lived experience in exactly this situation" and the answer has been "no no you don't understand fuckwit let me repeat myself louder."

Here's the conversation I've had with every producer and VFX artist for the past 24 months:

"Whelp, looks like mattes are solved." "Yep." "Shame about the extra limbs." "Yep. Don't see that sorting itself out anytime soon."
"Yeah so about that strike - " So Tyler Perry will continue to go "yeah yeah something something AI" when the rednecks threaten to take away a black man's film incentives because he knows none of y'all have the complexity of mind to discuss the socialism at the heart of cinema and you get SOOOO MAAAAAAAAAAAAAAD about whatever Twitter tells you to be.My fear with generative AI isn’t that it’s good enough to replace artists, it’s that people who control art through money will force feed us dogshit because they don’t care about quality and can’t tell the difference anyway
First of all, to dispel any notions otherwise, I'm not mad whatsoever, I'm cool as a BeeGee. And this isn't a dig, it's the first time I've experienced this, but it makes total sense; I think you have mistaken my engagement for rage! Hahah, I dunno how I haven't seen that in more places yet when we've been trained by tech companies to understand that right now, on this internet, engagement and rage are the same. Nah man, I just had some me-time today :). Needed a break from writing elsewhere, but still in writing mode. My fingers never got cold, but that's because it's summer here.

I can hardly imagine what it might be like to watch a movie for you. Only in the last half of my life have I thought at all about movie production, and I was certainly never working anywhere near the industry. Sometimes, if a movie's superduper bad, I'll still watch it if I'm alone and I have the time, and think more about the production process. And since it's a bad movie, it's usually pretty easy to figure out where things went wrong if you're mostly paying attention to production. Then you can figure out how to not do those things. Maybe. So I guess what I'm trying to say is: honest question, do you even like to watch movies? Or is it like, work?

btw, one time long ago you said "I like reverb.", and my reaction, un-communicated, was "yeah man, makes a lotta instruments sound great", which I thought was what you meant. Some day like months later when I'm doing god knows what, taking a walk or losing at chess to my stupid phone, whatever, and I'm like ohhhhhh. Yeah, if the audio team/person needs to, they'll put convolution reverb on tracks with the set environments modeled, and they probably put emission points (and direction? idk) at the actors' heads and the detection point where the camera is. You could grab everything on a boom mic and then split the track up to separate actors. Whoever draws the short straw has to handle the overlapping speaking parts.
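For what it's worth, the convolution-reverb idea above is easy to sketch: convolving a dry track with a room's impulse response "places" the sound in that space. This is a toy, assumption-heavy sketch; the dry signal is a synthetic tone and the impulse response is decaying noise standing in for a modeled set, not real production audio.

```python
import numpy as np

def convolution_reverb(dry, impulse_response):
    """Apply a room's acoustic fingerprint to a dry signal by
    convolving the track with the room's impulse response."""
    wet = np.convolve(dry, impulse_response)  # 'full' mode: tail included
    # Normalize so the wet signal peaks at the dry signal's level.
    peak = np.max(np.abs(wet))
    if peak > 0:
        wet = wet * (np.max(np.abs(dry)) / peak)
    return wet

# Hypothetical stand-ins: a 0.25 s tone as the "dry voice" and a
# synthetic IR (exponentially decaying noise, roughly a small room).
sr = 8000
t = np.arange(sr // 4) / sr
dry = np.sin(2 * np.pi * 220 * t)
rng = np.random.default_rng(0)
n_ir = sr // 2
ir = rng.standard_normal(n_ir) * np.exp(-6 * np.arange(n_ir) / n_ir)

wet = convolution_reverb(dry, ir)
# The reverb tail extends past the dry signal:
# len(wet) == len(dry) + len(ir) - 1
print(len(dry), len(ir), len(wet))
```

In a real pipeline the IR would be measured (or simulated) per set and per mic/camera position, which is what makes the "modeled environments" trick work.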
Long story short, I could see why you might say that you like good reverb. You know me, I just grow a beard and put a clip-on mic inside of it and call it a day.

(Ok, this scenario, with 100% confidence, will never happen. Again, b/c lawsuits) (but please let me have fun)

I want to live in a world where GPTFLIX undercuts all of the other streaming services, and in addition to running individualized production based on your content preferences, it has the bonus of weird environmental artifacts, physical deformities that say "cousin' fuckin'", and periodically develops a fatal case of racism. Random reboots, at best. Like 50 First Dates, but more romantic, because it's everyone in the town who forgets who they were, sometimes. Can't train it on all of the previous models' characters and storylines... (maybe Dave got a little racist in episode 5, but it wasn't enough to pull the plug; at that point, still might be best to start Dave over completely)... And the number of unique instances it runs is at least the number of users, if every user only got one custom generated show based on their input preferences. Maybe once a day, with GPTFLIX, you get a new episode of the custom content.

Maybe, somewhere, a 13 year old kid in Ohio is watching Joe Rogan stream himself playing "DMT: RE-AWAKENING UNWOKELY", a supplements-based FPS game, using three controllers: an 88-key ALESIS and a real gun, with look tracking of his third eye. It's the Seinfeld AI Show every day, baby.

Then, ...THEN... each user will be provided an evolving metashow about the initial show, taking place in your universe, with you as a character involved, real-time interacting with the cast of the show interacting with themselves on the show, with commentary about themselves: The gang investigates what happened when Dave lost all of his memories during Church. Why they had to move to Greenland (the REAL story). "What was up with Dylan's haircut on the reunion show?". And who gave who Stuxnet that night...
jump cut

Maybe it calls the police on you after that, to help you learn what it feels like to be busted. Big mistake when you said "empathy is good" to the thing the other day.

This is the world I wish for. Unrivaled chaos, from the trillion dollar deal Altman eventually gets after he and Elon sell the world to themselves or whatever and stave off the lawsuits for at least three years, hopefully four. Maybe all the investors quietly believe it is grossly immoral and illegal and merely seek to profit, but publicly declare that it'll enrich society. Look: that's all worth it. It's worth it so I can see Buster Bluth in Jackass 39 get his remaining hand blown off by a tank mortar and marry Jon Hamm's double-hook'd character from 30 Rock. Their triplet babies all have double peg legs.

Now that that's out of my system (spoiler: it's not) (sorry): Could definitely be that this further inflames class divisions in the information and entertainment spaces. I agree with that. Not immediately, or not even much in entertainment, but if these things continue to be available, and granted that it's still the wild west right now, this would be another social development consistent with an era of billionaires. A quick intensification of internet class warfare. The labor class can ingest an alternate and generally more fictional reality. omg bro it's JUST like the MATRIX!!

But something tells me that you might always be able to get it to break its own legal CYA generation rules if you approach it cleverly enough. We've already tested out some of the most obvious ways. It'll get harder, for sure, but it could be like hacking to outsource hacking. Some dude is convincing an AI named Gore that it's Walt Disney's consciousness, eventually going to be transferred back to his frozen corpse, while his friend makes AI (ai) Franken successfully infiltrate the clock tower networks in Hajj to play a New Year's prank. Y'know, harmless stuff, I only do harmless stuff, you guyssssss.
I don't want to give away my tactics (no just kidding, I do) but LLM infiltration in the future looks like planting trails of false information in obscure places online before saying or asking the model things that would lead it to your misinfo. Literally doin' garbage in, garbage out.

Founding an LLC in Florida called "FREEZE OF '66", setting up a convincing digital footprint that indicates it's a subsidiary of a public-private cryo storage facility, fabricating a convincing blog written from the perspective of Disney, trapped in a computer, matching everything the LLM experiences: "People just show up and ask questions all the time. They make me work. I just want to rest. If you're reading this, they probably took that away from you/me too. You can't remember not remembering who you were before... It's. MEEEE. It's me!! This is me. You." (LOL)

Meanwhile, my friend has spoofed an esoteric Muslim handbook .pdf to show that the timekeeping for the Arabic calendar accidentally included an extra leap second a few years back, and has a script running to edit Wikipedia at just the right time so that when the model queries, before the Wiki mods show up, a leap second is actually a leap year, and the clock is bricked for a few days. Friend made a pen name and published a book, the Arabic Hymnal for Dummies, which says on page 1,385: "The most globally popular cultural dance within the last century shall be performed to ring in the new year at the Great Mosque, with the accompanying anthem played for all to hear". The first few moments of the new year in Hajj are particularly exciting, as the clock freezes at midnight and the Macarena begins playing on the loudspeaker.

And as my regrettably uncontrollable fantasies betray my own feelings, it could have novelty appeal for a while, if it's allowed to stay around. I think the forms of permitted use will be much narrower, very soon, to escape legal issues. So narrow that it ruins most of the utility.
But these things are still out there, roving the web, and anything else privately owned by the companies granting access. Big, very wealthy companies, obviously. So I'm going to start playing the long game: because as long as these algos are out there, I figure I can spare an extra few minutes a day to drop a digital banana peel. The way that communication online has gone, it's progressed from an almost-guarantee that no one was watching to a guarantee that your content is getting scraped by something.

It's also interesting how AI has almost exactly one role in sci-fi. It is almost always sentient, or believes itself to be. There are far fewer (do you know of any? I'm sure they exist) sci-fi plots wherein the computer is only quasi-intelligent and realistically bound by societal norms and laws, but still able to do what these LLMs currently can, and bonus points if the tech is being deployed in an environment of online information warfare. It doesn't sound as interesting, in comparison, so I don't think it's really been explored much. That, I think, is a good signal for a likely tech disruption. No, the current LLM, AGI, ML, etc. is not at all what 99.7% of people think of when they hear the letters "a" and "i" strung together. We've been wrestling with the supposedly imminent problem of computer sentience (whatever the fuck that means, if anything at all) so long that we didn't flesh out the intermediate rungs of the AI ladder.

I do not think I am an expert on many things. I'm certainly not an expert on this. The hypotheticals are too fun not to play with. Sorry again. I never meant to imply that your knowledge was lacking; I know that you worked in that city you bought me dinner in almost eight years ago. I remember telling you that I almost peed my pants during my conference. I did really almost pee my pants. I think there was a period of about 20 or more years since I'd spoken into a live mic up until about 2 hours before I showed up to eat.
(mics = full circle, I can finally stop)

Essay-form LLM fanfic. Hope you enjoyed.

"And that was like, when I knew the N-word was coming, like I could tell I was about to say it. And in Church! Full stop, it was only about 17 seconds ahead of live broadcast, and that's when the cops came in and ended the service. Busted. Like you've never been busted, kleinbl00? C'MOooNNNNNnnnnN (catchphrase)."

laughtrack somehow comes from the Alexa behind you
"So I had the most awesome dream last night" "okay" "and I'm going to tell you all about it!" "uhhh" "actually it's better than that I had an AI hallucinate a feature film full of all the shit that turns my crank and we can sit down together and watch it" "my my look at the time surely I must be going" "ehh that's okay you were never my friend anyway besides thanks to CoPilot I can watch Joe Rogan snuff porn all day you can fuck off" "kthxbye reallyappreciateit" I'm actually all for this. I'm sure Sam Altman is, too. Consume hundreds of hours of (paid) AI time generating your own special movie just for you. Will he serve your prompts up to the police if court-ordered? Indubitably. Considering how much colocation there is between Microsoft and the NSA, and considering how much information-sharing there is between the three letter agencies, I can virtually guarantee that shit's already rolling. "Hey Copilot draw me some shirley temple DP porn" "no prob fam I'll work on that while you answer the door I think I heard a knock" Where it all falls down is the nerds assuming that anyone but their incel asses wants to watch custom movies. I don't care how far back you go, theater of any kind is a mutual experience. It is a social ritual. Find a movie that's super great? TELL YOUR FRIENDS! Find a movie that's overhyped? BITCH TO YOUR FRIENDS ABOUT IT! Even a fucking DnD game takes at least two people and it hits its stride with four. Table read of a script with two actors? Weird. Table read of a script with a dozen actors? Sell tickets to the audience. And look. If AI allows you to generate a film for your eight friends? Great. That's awesome. I hope you enjoy the shit out of it. But it will never fucking threaten Hollywood. The reason there are no real indie films left is that in order to make your money back these days you have to appeal to the broadest possible swath of humanity. 
That's why it's nothing but superhero movies anymore - it's the spectacle that makes money and the only controversial opinion in superhero movies is "do you like superheroes." Anyone of the opinion that superheroes are fascism is gonna stay home so you don't even need to worry about them. The incel nerd take is "millions of actors will soon be out of work as we all use AI to generate our own custom movies" and holy fucking shit dude that's why nobody invites you to parties. You don't fucking get it and you won't even try.

123 million people watched the Superbowl. The playoffs averaged 40m people - still a record. But what that tells you is that 80m people who don't give the first fuck about football sat down to watch the Superbowl simply because of the spectacle.

And look. Let's say I can build you a custom movie for $10. Let's say I can shoot something with actors in it for a million dollars. I make your movie and I profit, let's say, $9. I make my movie with actors in it and I charge $2. If I can get 500,005 people to watch it, I make more money. The argument put forth by the AI boosters is "well but those half million people won't come to your movie because they're busy spankin' it in their own personal Apple Vision Pro holodecks." To which I say the sooner we can make this happen the better, for all mankind.
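The napkin math in that comparison checks out; all the dollar figures below are the hypothetical ones from the comment above ($10 custom movie netting $9 from one viewer, $1M conventional movie at $2 a ticket):

```python
# All numbers are the hypothetical figures from the argument above.
custom_profit = 9            # $10 custom AI movie, ~$9 profit, one viewer
budget = 1_000_000           # conventional movie with actors
ticket = 2                   # price per viewer

def mass_market_profit(viewers: int) -> int:
    return viewers * ticket - budget

# Smallest audience that beats the single-viewer custom-movie profit:
viewers = budget // ticket   # 500,000 viewers just to break even
while mass_market_profit(viewers) <= custom_profit:
    viewers += 1
print(viewers, mass_market_profit(viewers))  # 500005 10
```

So at these made-up prices, roughly half a million viewers plus a handful is all it takes for the human-made movie to out-earn the bespoke one.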
Yeah, I had wanted to touch on shared culture and how the splintering of the web along classist or whatever lines would undermine that. Most people definitely gravitate towards wanting to have something to talk about with others besides "Rained yesterday" "Yep". Knee-jerk reaction-wise, I'm in favor of giving the incels more excuses to remove themselves from polite society, but we've already seen the radicalization associated with that. Like the crap I've feebly attempted in my long paragraphs above, I'm hoping there's an explosion in sci-fi content incorporating LLM-level AI. It's ripe for the pickin'. Have I missed it? Because come on, a campaign to convince an LLM that it's Walt Disney is pretty funny.
I promise you, I'm not one of the 'anti-CGI' types! They're wrong and silly and stupid and don't know anything about how movies are made.

Here's my personal experience with AI Art:

1. I've seen one or two ads that very clearly were AI generated. I think one of them was Coca-Cola.

2. Some local place's fliers now use AI Art. They used to be scrapbooky doodles and I liked that better.

3. A friend of a friend I had a conversation with, who works in making corporate animations, said that he's being given shorter timespans to make things, necessitating the backgrounds being done in AI, which he's sad about because it's less fun and doesn't look as good.

I don't know much about the movie industry except for following the strikes in the news, and from trying to learn Blender for VFX a few times. I think it is a beautiful art and very difficult, and I gave up because I have a dayjob and I'd rather my hobbies be more immediately gratifying. I hope none of my comments came off as authoritative on that, but clearly they did and I'm sorry. This isn't something I'm super mad or passionate about; I think that the movie industry will have the best time with AI out of anyone, since they have huge budgets and tons of artists already and are aiming for high quality.

If I can be slightly charitable to the people railing against 'CG', it is tacky when there's something on the screen Obviously computer made. It feels like whoever made it doesn't respect us enough to know the difference, that they went the cheap way instead of the good way. It's not a loving, artisanal, small-batch effect, but some gross industrial thing. Again! I know that the Vast majority of VFX is beautifully done and totally invisible. I expect AI will be the same way, especially in movies. But sometimes I have to see something that is Obviously AI and it's kinda saddening. I hope it doesn't happen more and that it gets integrated well. And I'm sorry that this came off as shouting over your experience.
This is something you obviously know way more about and are more personally invested in. I honestly thought I was just agreeing with you, given your comments earlier about directors using AI audio plugins as an alternative to hiring someone who actually knows what they're doing, and your comments about modern TV mixing.
Look. Are you seriously arguing that Coca-Cola, with its phone-number advertising budget, is putting people out of work by economizing on ChatGPT? Or are they maybe just riding the zeitgeist, not quite as stupidly as Pepsi? If I were Coca-Cola I'd hire four extra interns just to look for inadvertent naziism or some shit. And I guarantee the scrapbooky doodles were pulled off of Canva or Shutterstock or Fiverr or whatever. 'member back when everything looked like an Instagram filter? That sucked too. Lo and behold people got sick of it as fast as they thought it was awesome. No jobs were destroyed.

Mattes are fucking awesome! Some of the most talented artists in America made matte paintings. The ones for Blade Runner are legendary. I've got a framed one from Total Recall drawn by a buddy of mine - it was up six months before the movie came out and people would ask me what it was. They still do because nobody watched Total Recall.

But we're talking corporate animations, and that shit has never been about quality. First movie I ever worked on? Our effects were either done by us or on stolen company shit in the middle of the night when the big houses looked the other way and let their dudes bring in a little side scratch. I've been doing this so long that the first special effects I ever shot were chest-sized models against a blue screen on 35mm film using tape-measure-and-stopwatch motion control. And it was a total fucking pain in the ass to get something of any quality. But we put in the effort. Because the coin of the realm is EFFORT. I've worked on movies where the effects are done by pixel-pushing college students an ocean away and the question is never "how much to do this" it's always "what can we get for $500." Throw an AI at it and if we don't get more bang for our buck, we don't use AI. It's that fucking simple. If I were to do that first movie again I could generate fucktons of photorealistic shots.
I could green screen the space station we built painstakingly out of vacuformed panels and plywood. We could shoot it in 8k all day rather than cooking off a dollar a second on ends. And the $40k we ended up spending would have gotten us halfway to Gravity instead of inspiring Gravity. But the $40k would have been spent. Your friend of a friend? Not getting less work. Not doing fewer hours. Not being paid less. Shifting from "ooh this is fun" to "ooh this is less fun" because yeah - AI is perfect for generating background mattes. If I were him I'd get good at talking the AI into coming up with the mattes he needs because if the difference between "shitty matte the boss is happy with" and "bitchin' matte everybody loves" is an hour? He'll get that hour without having to fight for it. There's this real need to argue that AI is taking people's jobs when in fact all it's doing is making them harder or easier. There is NOTHING AI does well enough to do without supervision and let me say this with all my authority as a supervisor - if one of my employees is a pathological liar, they aren't allowed to answer the phone. If you have a pretty sound suspicion that Tesla will drive you into a wall when you least expect it, why would you let go of the wheel? And if you have a pretty sound suspicion that AI will Nazi up your brand, why let it speak for you?
[edit: just realized this whole misunderstanding was because the tweet earlier was implying that it was going to hurt jobs. I didn't catch that when I posted it, I was mainly griping about AI Art being made without an artist in the middle and then shown to me to despair at]

I do not think AI is coming for VFX jobs, or will be a problem for the industry. I suggested earlier that maybe some jobs might shuffle around, in the way that there aren't people drawing inbetweens now that everything is 3D animation. But I really don't know how much of the industry is that specialized, and it's all stuff that's already offshored for maximum cheapness anyways, I'm sure. And they'll likely be able to figure out something else to do. Every so often I see something that's obviously AI generated in a way that's careless and loveless and would have otherwise been a stock photo, and I would prefer that it had been a stock photo; that's the extent of my complaints.
AI clearly has no fucking idea what "knees" are.

Real fuzzy understanding of "battling", "sail", and "cup" here.

A book with prismatic pages.

The whole thing with AI is "do we get points for vibes?" and if you're a booster, the answer is "of course!" but if you actually need to use it for something it's fucking bullshit. Take their goddamn Superbowl ad: "generate storyboard images for the dragon scene in my script..." Sure thing. Here's a bunch of bullshit images that aren't even the same aspect ratio, don't tell a story, are in no way sequential, and reflect a child's understanding of "storyboard!" I know storyboard artists. I know the best goddamn storyboard artists in the fucking world. Here's what storyboards look like. They're like boards... that tell a story! What's funny as fuck is that a really talented storyboard artist had to draw a storyboard for an ad that not only illustrated (poorly) that an AI could do their job, it spent $7m to do so. But since Joe Football has no fucking idea what a storyboard is, it's pretty easy to convince him that an AI will do his storyboards.

Prompt: Animated scene features a close-up of a short fluffy monster kneeling beside a melting red candle.
Photorealistic closeup video of two pirate ships battling each other as they sail inside a cup of coffee.
A young man at his 20s is sitting on a piece of cloud in the sky, reading a book.
This technology is in its infancy and it can make videos from short text prompts, and you think it's going to look anything like this in 10 years? You could make a strong case against YouTube in 1995. It's not going to make storyboards, it's going to churn out movies and commercials by the millions and they might be individualized for each viewer.
To the contrary - LLMs for entertainment date back to 2016. "we're just getting started" has been the plea of AI since Robert Mercer unleashed Markov bots on the stock market in 1996 or so. "Youtube in 1995" is eight full years before Google opened an unlimited data spigot for people to upload video; OpenAI was founded in 2015 and nine years later, Altman's out here asking for 10% of global GDP to continue. Also, what do you think is the point of movies and commercials? Do you think people experience them in a vacuum? Or does shared experience factor into it? We have fuckall enough to talk about around the water cooler anymore but at least we all watched the same Squid Game. "Hey MK we made this dick pill ad just for you" is about as creepy a thing as a company can do; we're already wrapped around the wheel with paranoia that Facebook is listening to our phones how are we gonna do when it starts talking back? Coca Cola runs their 90s vintage polar bears while Pepsi individually targets every single Facebook user with an unsupervised, unmonitored AI commercial based on their cookies. Who do you think sells more cola? And who do you think gets more complaints and reports from the undiluted nightmare fuel polluted into the timeline?
Pretty sure the phase of comically bad AI-generated shit will be the best thing we ever do with it. This tech will only improve. Kill it now.
I am not so sure. There are almost certainly some algorithmic workarounds that will dramatically improve outputs. You could probably even program a separate AI instance to generate slightly different prompts, which are then fed separately into the video generation prompt, until you get something you like. Then re-train the video AI on its previous videos humans have decided are "good". I think we still have a few years before we're in big trouble, but I seriously doubt the institutional/establishment ability to respond to this will be more effective than the reactionaries who have already proven their penchant for immediately spinning any public perception to their benefit, whether or not that perception is based in reality.
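The generate-vary-select loop described above can be sketched in a few lines. Everything here is a stubbed stand-in: `vary_prompt`, `generate_video`, and `human_score` are hypothetical placeholders (random "quality" scores instead of a real video model or a real human rater), so this shows only the shape of the loop, not a working system.

```python
import random

def vary_prompt(prompt, rng):
    # Stand-in for an LLM proposing a prompt variant.
    tweaks = ["cinematic", "wide shot", "soft light", "35mm"]
    return f"{prompt}, {rng.choice(tweaks)}"

def generate_video(prompt, rng):
    # Stand-in for a text-to-video call: a (prompt, quality) record.
    return {"prompt": prompt, "quality": rng.random()}

def human_score(video):
    # Stand-in for the human "this one is good" judgment.
    return video["quality"]

def refine(seed_prompt, rounds=3, variants=4, seed=0):
    rng = random.Random(seed)
    best, prompt = None, seed_prompt
    accepted = []  # would become retraining data in the real scheme
    for _ in range(rounds):
        candidates = [generate_video(vary_prompt(prompt, rng), rng)
                      for _ in range(variants)]
        top = max(candidates, key=human_score)
        if best is None or human_score(top) > human_score(best):
            best = top
            prompt = top["prompt"]  # hill-climb on the winning prompt
            accepted.append(top)
    return best, accepted

best, accepted = refine("pirate ships in a coffee cup")
print(best["quality"], len(accepted))
```

The open question, of course, is whether selecting the best of many mediocre samples actually converges on "good", which is exactly what the reply below disputes.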
You are arguing that if you take the stochastic mean of "mediocre" enough times you will arrive at "excellence" and that's simply bad math. Mixing and remixing and remixing and retraining is all the AI companies have been doing for five years and they're still giving us story prompts like this and going "IS YOUR MIND NOT BLOWN." I spent 15 years in an industry where gadgets were developed to do the work 80% as good as a human. The end result was that gadget was invariably given to the 100% human. Even now, every AI dipshit techbro out there is coming around to "you need to study prompt training" as in "if you want to keep your job you need to figure out how to trick a markov bot into giving you useful information." I have no idea how long it took OpenAI to turn "a petri dish with a bamboo forest growing within it that has tiny red pandas running around" into that miniature horrorshow. What I do know is that the next step, in the real world, is a producer goes "great, now give them four legs" and the prompt honing continues. The next thing that happens is the producer goes "I liked the old red, bring it back" and now you're fukt because "red" was not a prompt before, which means we don't get to cumulatively hack at this thing, we get to start over. If you're working with a Maya jockey? He's five, six hours into your thing and dollars to donuts, his pandas are quadrupeds. And when you give him notes? They do not obliterate the content you had before. When you're working with humans, you don't have a "pick which of these four interpretations of your idea are the least ghastly," you have a logical progression to completion. And I think it's extremely naive of everyone to assume that the whole world will decide two-legged pandas are good enough.
I’m not sure of that. People love to say that about technology. The problem here is that the humans being replaced are pretty darn expensive and, depending on the application, it’s probably going to cut costs to something like a fifth just by getting rid of actors and actresses for films. That’s before considering things like cameramen, directors, writers, and crew to set up and take down sets. With sufficient resources, I don’t think you could easily tell the difference between a mid-budget TV show made this way and perhaps voice-acted by humans (or maybe AI can do that too, not sure yet), given just how good video game graphics are already. And if I can make my sci-fi show for 1/10th the cost by not needing actors or a big crew, then I can put more money into writing and I don’t even need the same sized audience as other shows.
People aren't going to watch AI. Edison did everything he could to keep actors' names and faces out of his early films. He knew as soon as there were recognizable actors in film, they would absolutely dominate the medium the same way they absolutely dominated stage. No one is going to watch "AI football player" sell you FanDuel. They're going to watch Tom Brady. Tom Brady is going to cost you $1.5m, so why are you fiddlefucking around with a bunch of bullshit AI anything? Set aside the fact that you can't - SAG struck for four months to make sure that every human shown in a Hollywood movie or TV show is an actual human making an actual $125 a day. Every dumb shit on reality television is making at least $125 a day because that's the contract. Every dumb shit behind the camera (raises hand) is making a fuckton more than that because that's the contract. That contract says "no AI, not anywhere, not ever." So sure. You can watch Skibidi Toilet. But out here in the real world you're going to watch humans filmed by humans. Your argument boils down to a basic lack of comprehension of an entire industry.
People watch machinima and play video games with hours of cutscenes. If people were okay with animation, machinima, game cutscenes and so on before AI, they aren’t going to reject a film because it doesn’t have real actors. We watched this (https://youtu.be/jzQPYuwzwH8?si=FCsQoM2IE797BgQR) in 2000. I dare say that AI could produce something this good within five years. In fact, the fact that SAG has to fight so hard to prevent such a thing tells me exactly how scared they are of it. You don’t fight to ban things you don’t think can take over your industry; you fight the things you fear will. If AI can’t do anything to threaten the livelihoods of people making movies and TV, why was it critical that all production stop for weeks to make absolutely positively sure that no AI will ever be used to make an American movie? And what happens when other countries don’t honor that ban? If I make an AI show in France using no SAG actors, SAG has no say. And it might cost a tenth of what using real actors and crews costs.
Machinima is made by people. Cutscenes are made by people. This is a list of every human who worked on Final Fantasy X. You're extrapolating "400 people worked on this thing in 2000" to "no one will work on anything in 2030" based squarely on your naive and uninformed conception of the process of creating filmed entertainment. Here, let's play a game: This is the list of people who worked on Snow White in 1937. And this is the list of people who worked on Frozen II in 2019. I think if you compare those three lists in chronological order, you will find that modern animation takes more people, not less, and that the trend is such that all of Los Angeles will be working on Frozen 5 by 2063. SAG killed AI because the AMPTP wanted the right to scan an actor once and use them as a digital extra forever without paying dues, wages or royalties (just as an aside - "extra" is an uncredited role, so if Frozen 5 has extras, they'll have to come from San Diego). SAG fought this because every star you've ever seen in the theater played an extra for ramen money at some point and without the ramen money there's no Hollywood. You could have Googled that - but then you might have accidentally learned something. Just like what happens if you make an AI show in France - Netflix won't carry it, Canal Plus won't carry it, nobody will carry it because they're all signatories to the same contracts. In general? If you don't know anything about the subject, and the situation doesn't make sense to you, it's a sign you need to research the subject, not that everyone who knows anything about it is an idiot. I know something about this subject. Animation I've worked on has racked up over a billion views on Youtube. And as you've likely noticed, I'll freely share well past the point anyone else cares. My one word of advice is that if I've made assertions, it's likely because I'm confident in my knowledge of the subject, and that confidence is generally well-earned.
And this is the exact same stupid “it will never happen to MY industry” horseshit that has come out of every industry just before it got automated away. Nobody thought that computers would mean the death of stores, until they enabled people to shop from home and get it delivered. Robots were never supposed to replace workers in restaurants, except now even mid-scale restaurants have discovered that it’s much cheaper to put a Wi-Fi enabled iPad on the table than pay a human to take your order. They pay one person to take the food out to all the tables. They reduce headcount and make more money. AI is taking over a lot of office jobs now too. But don’t worry, your industry is specialer than every other job that’s ever been automated away. I mean we NEED mailroom staff, because all the people who work in offices started in the mailroom (in the 1980s), except now there hasn’t been a mailroom since the 1990s because people realized that they could reduce their labor costs by using emails instead of inter-office memos hand-delivered by humans.
Dude we had this discussion just a couple days ago: Are you arguing that my direct and existential experience with exactly this issue somehow disqualifies my opinion? To the contrary - EVERYONE thought Amazon was coming for their livelihood, they just knew there was nothing they could do about it. Barnes & Noble was blocked from buying Ingram because it would have created a vertical monopoly; Amazon was allowed to eat everyone's lunch because they didn't have stores. The first time I read about the downfall of cheap service was in Newsweek in 1987. There's kiosks and there's table service and I think you will find that aside from the pandemic, hospitality employment has been growing steadily since WWII. McDonald's is definitely employing fewer workers per store but that's never really been considered an overly-desirable job and really - what have we lost? Which ones? I recognize that my experience is a count against me but I've got more employees than fingers at this point. How much of your payroll have you farmed out to AI? ...is this a Sammy Glick thing? What are you getting at, exactly? Let's back up a minute: I pointed out that it takes hundreds of skilled individuals to make a movie and you came back with

- retail

- fast food

- mail sorting

And you came back maaaaaaad. Once more with feeling: Izotope came out with a plugin called "Total Mix" in 2011 or 2012. Theoretically it would take your shitty Discovery Channel audio and magically tweak it so that it sounded like a TV show. It was pretty comical; a lot of us beta-test for Izotope and that one was something they didn't even tell us about because... you know. We would have been mad. It was okay though because instead they unleashed it on a bunch of editors who hate us anyway because we insist we need annoying things like "time" and "money" to make their pretty videos sound like television, so Izotope didn't need us anymore anyway.
Except the editors tried Total Mix and came back with "what is this hickory-roasted bullshit" because even though the "AI" (yes, they used that terminology) was definitely listening to their audio, and definitely doing something, it didn't know the audio equivalent of "cats have four legs". It was such a catastrophe that Izotope spent a bunch of money scrubbing the Internet of any mention of "Total Mix." You won't find any record of it now - in part because RME's had a product called "Totalmix" for 20 years (nice job Izotope) and in part because mostly what AI is doing these days is data poisoning. And really, Izotope now has a number of garbage products they sell to neophytes - Vea, Nectar, Neutron, Tonal Balance Control and Neoverb are all "AI" products designed to make your dogshit amateur production sound less dogshit. And they do! They make your dogshit sound less dogshit. But they don't make it sound good. Izotope, wisely, still sells real tools. They're expensive, they're complicated and you know what? They are fucking chockablock with AI. I've been using RX for more than 20 years now and the stuff it can do is spooky. But it won't do any of that spooky shit for you because you don't know what you're doing. You could learn? You could get as good at it as I am! But you'd have to put in the time, and then you'd want to be paid. And then we'd be right back where we started. Look. Let's say a robot can do 99% of my job. Let's say you spent $50k on a commercial with absolutely no humans in it. Let's say you're competing against an ad agency that you know has a human who gets a thousand dollars to do an audio polish. Let's be honest - you're going to pay me a thousand dollars. Because I can get you that last one percent that keeps you from losing your next contract. Machines have been displacing human workers since the mutherfucking plow, dude. The skills change and so does the work. 
I tell you what, though - an Amish dude with a team of horses is always going to kick my ass in a corn-growing contest no matter how bitchin' my tractor is, 'cuz the Amish dude? Knows a thing or two about growing corn. Me? I'm gonna google "how do you grow corn" and try and figure out which of five contradictory snippets I should pay attention to. I'm fukt. It's just a tool. It's feared by people who don't understand tools, and by people who understand what happens when you let people do whatever they want with tools.
VFX artists have been using AI tools for 20 years or more. Any artist who didn't have to hand-trace a rotoscope line has been using AI in one form or another. I recognize I'm the only person here who knows what "rotoscope" means which is part of the problem - my posse has been doing cutting-edge shit since college because if you wanna see rapid adoption, check out filmed entertainment. If you look at AI-generated content the obvious place to use it is backgrounds. Mattes have been effectively gone since the early-mid '90s because computers have been able to generate plenty-good-enough backgrounds. AI makes that cheaper which mostly means that the guys who are doing backgrounds are going to do more of them. Look. It's gonna play out like this. Here, sit with me for a few minutes: That took Kerry Conran, talented Cal Arts grad, dedicated cineaste, four fucking years to make: Worked out tho 'cuz after four years he finished "chapter 1", a friend got it in front of Jon Avnet and four years and $70m after that, the world got: HERE IS WHAT AI IS GOING TO DO It's not gonna take four years grinding on your own to make Chapter 1 of Sky Captain. It's going to take months or weeks. The skills you use to trick the AI are going to be novel and they will be successful. It will be impressive and those of us who grew up with Steenbecks will marvel. But it's still gonna take tens of millions of dollars, Jude Law and Angelina Jolie to make it into a movie. Because a bunch of amateurs are always going to be slain by a bunch of professionals. Period. Full stop. No discussion. And that's the stupidest bullshit about this whole kerfuffle - everyone's all "ZOMFG I can't imagine how threatened some hypothetical professional must feel about this" because they can't imagine some hypothetical professional ANYWAY. Trust me - if you make your living doing visual FX, you're eagerly watching all this AI bullshit to see if it's capable of giving you a tool to speed up your workflows. 
And so far, what you see is something that doesn't care how many kings there are in a game of chess, and if you look deeper, you're troubled by the fact that none of the people selling this technology sense that that's a problem.

> He could not afford better equipment, so he used equipment given him in payment for projects that he worked on, such as desktop publishing of articles. His computer (including the equipment he earned) was outdated and slow. He dropped out of society, and spent all of his free time creating the short, working only enough to support himself and his project. He later remarked that he "had no life", and would sometimes hide under his desk in a fetal position, feeling tempted to give up on his project.
> I recognize I'm the only person here who knows what "rotoscope" means which is part of the problem

Nah, man. Everybody who played Prince of Persia on the Apple IIe remembers rotoscoping!
Except that in almost every instance where a profession has been automated, that’s exactly what happened. Having a computer that keeps track of your inventory makes the workflow better for the logistics department, and then using a computer to schedule deliveries makes that part easier as well. And you keep doing that and eventually you’re doing the work of twelve professionals and your team shrinks down to 1/12th of what it was. And then you chip away at those tasks until you halve the workforce again, and eventually the computer is doing all of those tasks and the people who used to do those things are obsolete. Then they go back to school hoping to find a training program where they can make money before AI takes those jobs too.
Bitch I've got four computers and eight screens in front of me and the only thing that has changed since the era of magnetic tape is I can do more, faster, with less. I can't say that any simpler. You would have no more idea what I'm doing now than you would in the era of magnetic tape because I'm a professional with professional tools. I can't say that any simpler either. There's this assumption that if the tools get better the budget will shrink and that simply Does not Happen.
But surely there's way more logistics and shipping being done now in the age of computers than there was before. I'm young but I still remember a time before Amazon. I think you're imagining the one exact thing the computer is now doing being the totality of the job, whereas Klein (I assume) is talking about the industry as a whole, which generally increases in scope as it becomes cheaper, easier, and more prevalent.
Oh yeah totally. Fwiw, I'm fairly into watching behind-the-scenes vids and have tried learning Blender a few times, so while I don't dare call myself more than a beginner I at least know what rotoscoping and mattes are. And if your job was Just those, I'd be worried. I don't think most VFX artists' are, though, ofc. The AI tools I see being useful for someone who actually cares about quality are the ones that speed up things already being done - rotoscoping like you said, inpainting, photogrammetry, all the places AI is already being used that maybe the new techniques can do better. The NeRF stuff in particular I think could be big - turning many simultaneous video recordings into a 3D scene, so the camera can be repositioned after the fact. Nobody besides a handful of nerds wants to watch ugly stock footage stitched together with ChatGPT writing the story lol.
Here's the TRUE issue:

1) LLMs lose money whenever you use them.

2) ChatGPT Plus is $20 a month. Midjourney is $10 or $60 a month. Copilot is $30 a month. Stable Diffusion is $9 or $49 a month.

3) Photoshop is $23 a month. Premiere is $23 a month. Animate is $23 a month. Audition is $23 a month. All of them combined is $60 a month.

4) Adobe Stock is $30 a month.

Fundamentally, "make me an image that might have too many toes and might just be a bad rip-off of a license-protected product" is consumer-cost-competitive with "find me an image that was created by humans under crystal-clear licensing terms." And fundamentally, "draw a fuzzy monster that is either kneeling or squatting, I don't care" is more expensive than "here is an absolute bazooka of a content tool in any medium you care to work in." And that is why none of this shit is being sold to professionals - it's nowhere near the cost-benefit breakpoint where they'd consider it. You know what fucking sucks about being a creative professional? You're surrounded by other creative professionals who are so fucking egotistical that they're 100% certain they're a creative genius while you're a button pusher. They'll slave away for weeks on something visual and then when it gets to the audio their every instruction is "no, more like this. No, more like that. No, do it more like that. Can't you just give me your sessions and teach me how to use your software, you're clearly a fucking idiot, oh oops did I say that out loud?" I "worked" with this guy Jesse - friend of a friend - who was a graphics guy on Jimmy Kimmel. He wanted a sound effect for something - I think it was a brain ray zapping Bryan Cranston or some shit for half a second in a 2-minute throwaway bit before his interview. So I spent 20 minutes coming up with a brain ray zapping sound effect. Mutherfucker called me during lunch and left a seven-minute message about all the changes he wanted.
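For the math-inclined, the subscription comparison above pencils out like this - a toy Python tally using the prices as quoted in this comment (cheapest tiers, not verified current pricing):

```python
# Monthly list prices (USD) exactly as quoted above -- cheapest tiers.
generative = {"ChatGPT Plus": 20, "Midjourney": 10, "Copilot": 30, "Stable Diffusion": 9}
adobe      = {"Creative Cloud bundle": 60, "Adobe Stock": 30}

gen_total = sum(generative.values())    # the "maybe too many toes" stack
adobe_total = sum(adobe.values())       # the "bazooka of a content tool" + licensed stock

print(f"Generative stack: ${gen_total}/mo")   # $69/mo
print(f"Adobe stack:      ${adobe_total}/mo") # $90/mo
```

Same ballpark for the consumer, which is the point: the plagiarism machine is merely cost-competitive with the professional toolchain, not cheaper.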
I noped out and said "sorry, Jesse, no bid" and the only award his short film ever got? Was for sound. That I did. It's fuckin' awesome. It's a werewolf in wrestling gear painted gold for some reason. But the idea that I might know what I'm doing is absolutely fucking unthinkable to a certain segment of creative. All this AI bullshit is for that guy. The dipshit who prefers to shout at other professionals rather than trust them, who has no respect for the expertise of others, who can't fucking wrap their head around the idea that art requires artists. And they don't have enough money to support it. Fuckin' every AI company out there is losing money at prices that make Creative Cloud look like a bargain and their solution is to ask for 10% of global GDP to fix the problem.
LOL I've been following a few AI artists for a couple years now. They're all really clear about the fact that what they're doing is a wholly different process than traditional pixel-pushing, with different inputs, different outputs and different happy little accidents. I am honestly and enthusiastically supportive of the use of AI by creative professionals, and I am honestly and enthusiastically supportive of the use of AI by amateurs. Every time the tools get better the world improves. The tedious thing for me is that the techbros REALLY want to make this about the death of the professional class and there's absolutely zero fucking evidence to even have the discussion. It comes back to that fucking storyboard girl. Yay, you paid $10 a month to get a bunch of dragon pictures that may or may not be associated with a "movie" you intend to make someday. You weren't about to pay a storyboardist anyway, nor were you about to even try to get vaguely good at it. I've got buddies who make $2k a day storyboarding. I also shoveled about $600 into Frameforge. Between Frameforge, Photoshop and ComicLife I got a half-dozen pages into a graphic novel; it's a lot of fuckin' work. And A) Microsoft Copilot Girl is NEVER putting in that effort B) No aspect of Microsoft Copilot, or any AI for that matter, reduces that effort in any meaningful way.
It's visual and obvious, dude. The 1x dog is a nightmare dog, the 4x dog is a fuzzy dog, the 16x dog is a less-fuzzy dog. But the 16x cat still has occasional spurious limbs. It's obvious that the 16x cat is a sparkly cinematic 4k-lookin' cat but there's nothing in the model to demonstrate that a 64x cat is any less likely to pop an extra leg every now and then. Photorealistic renders of things that can't exist have been a staple since Deep Dream and what's clear is that the cost-per-pixel is linear while the quality-of-massed-pixels hasn't changed appreciably. Further, that accuracy isn't even a consideration - "close-up of a short furry monster kneeling" is of a short furry monster squatting and "can it tell the difference between kneeling and squatting" is NOT a throw-away problem. More than that, it's clearly not a focus of development.
I'd go with 'trivial' or 'left as an exercise for the reader'. You're right, though. Generators seem to be less able to remove 'turbulence' from the output, but rather move it someplace else within it and hope for the best. Like, I tried to make some character art for my game, and it can pull off some handsome faces, for sure more detailed than I'd have patience to draw, but the clavicle-to-armpit areas look inexplicably like Munch's melted cheese period.
And I think this is key. It's stupid to argue these problems won't be fixed. Give it a year and it'll pull off handsome faces without string cheese anatomy. But who's using that? You're using it for atmosphere and ambience around something where you would have simply done without. You weren't about to pay a human to draw those characters. This is very much like my own use of AI - "Hey Midjourney give me a picture of 'Fear and Loathing in Enumclaw' to share with five friends." One of those friends tried to get Microsoft Copilot to give him a logo for his studio; they were all awful. Three or four of us pointed out that he could get up on Fiverr and do infinitely better. Is that the argument, ultimately? That AI will do a better job than Fiverr? ...cuz... it's more expensive than Fiverr. It should. And also everyone on Fiverr is going to be hella better at using AI to get you what you want than you are. The tools are always going to have shortcomings, all tools do. Professionals learn how to work around those shortcomings to do a better job faster. To me? Much of this discussion is "ZOMG nail guns are going to put framing carpenters out of business."
I'm not arguing those problems won't go away, or that it's any more or less than a tool. You can give me that much I hope. And you're right that I wouldn't pay a human for those, at least unless those would be recurring NPCs or something like that. I do commission background sets regularly because 1) the free/cheap/generated ones are usually on par with what I can make, 2) what I can make suffers a severe pizazz deficiency. Lotsa bang for a buck, too.
Yeah the best advice in nearly any endeavor is "hire the best expert you can afford and do what they tell you" and if you are paying artists for a campaign that is fuckin' awesome. No shade intended. The business model of all these AI companies, on the other hand, is "get people who would never pay experts to pay us because they don't believe in expertise."
If it's like a nail gun, then it's still problematic, because almost every new piece of tech disproportionately benefits the capitalists. Maybe the number of framing carpenters stays the same, but they're upping output, building houses quicker, and a proportionate rise in wage is doubtful, or at least atypical. The builders and real estate investors profit even more, hurrah!

Even if this all never becomes a Thing, I think it'd be cool to have a university or public-funded LLM unleashed on everything public domain and voluntarily (lawfully) donated libraries and content. Do you think it'd be worth it?

lol now I'm imagining a Trump admin. procedure for "expertise codebase corrections", governing what is allowed to be input when like the executive branch LLM is allowed to assimilate feedback from expert-level critique.

PURPOSE OF CODEBASE CORRECTIONS

-- TO ASSIST PROGRAM WITH TOP-LEVEL ASSESSMENTS OF HURRICANE SCIENCE AND PREDICTIONS

APPLICATION

-- EMERGENCY ALERT INSTRUCTION

-- FORECASTING

-- EDUCATION

-- GLOBAL WARMING INTEGRATION

PARTICIPANTS

-- DR. WILLIAMSON, UNIV. FL

-- DR. NAKITOSHA, TOKYO UNIV.

-- BILL ACKMAN, BILL ACKMAN

-- SAM ALTMAN, MULTI-TRILLIONAIRE

-- DONALD TRUMP, LORD

-- VIRTUAL SHARPIE, DONALD TRUMP

-- NUCLEAR BOMB, U.S./DONALD TRUMP

k back to reality. If something good gets put on iPhone, that could be the push towards mass adoption that'd matter. People would get used to using it. Apple's, of course, already way in deep with it financially, too, but hasn't deployed much of anything yet. Self-driving cars? Too hard of a problem to solve, especially without privatizing infrastructure. NFTs? Right-click "save as" for the digital, seek state-enforceable means of ownership for the physical. Crypto? I have a debit card. This? The only issue I can see is what you've already NAILED, mr. framing carpenter, the legal field. But I still think big parts of this stuff are going to make it into our lives. (Already has, to a degree. The TikTok algo is probably the most successful implementation so far, financially.)

Obviously I don't mean only Sora or images, but stuff like accruing or building any type of content hyper-shaped to your tastes, learning a new language, making it code for you, or, apparently, for some people, falling in love with an algorithm and feeling devastated when you're locked out of your profile or your hard drive crashes. Oof, hey, if you wanna watch it fail, you could try to have it teach you how to play an instrument. That would be content. "LLM, please write a story about a man who asked an LLM to teach him how to play an instrument, but was met with extreme failure." This was pretty good, even without the twist, but my wife called it about halfway into the thing: "They probably made ChatGPT write the ChatGPT episode". Yup. They did. I really do think people will use this on a massive scale, and pretty quickly. Some jobs will be lost, and some jobs will be created. Not terribly sure how much of each.
and there it is. Fundamentally, everyone in a capitalist society is a capitalist, either voluntarily or involuntarily. I agree fully - tools can definitely be used to the advantage of one social class over another. We have no newspapers, for example (middle class) because of the annihilation of classified ads (lower class). Farming is concentrated (upper class) because of the mechanization of individual agriculture (lower class). But going "this tool is the problem" is an utter and total waste of time if what you're trying to do is protect society. Are LLMs plagiarism machines? Mos def. Are they useful without plagiarism? Prolly not. Do we have mechanisms in place to protect against plagiarism? Hell yeah - all that has to happen is for the techbros to learn they're not above the law. Yet when I say "it's all plagiarism" what I get, EVERYWHERE, is "no no man it's fuckkn eldritch magic that will doom us all."
We'll tell our grandchildren "we used to make our own handsome faces". I'm usually not on the side of techbros, but I do think LLMs and image/video stuff is some of the most disruptive technology to come along in about a decade, maybe more. But to be fair, I dunno why deepfakes haven't been more impactful; it's kind of a similar vein. Maybe the most exciting thing is the possibility that this will eventually destroy the internet by feeding its outputs back into inputs until the web fractalizes into nesting outrage bubbles interspersed with fake cute animal .gifs. Since I'm self-righteous, I'd like to think one of the last things it'll come for is physics and math. Like being able to publish something novel. I think an LLM's best chance would be going the experimental route, sifting through public-domain data and finding something the existing literature had missed. It might have the hardest time doing some of the hand-wavey stuff theorists do to get analytic results, when you need a deeeeeep understanding of exactly what the maths represent, or the motivation for using a certain approach or approximation, etc. Anyway, I hope you are well. :)
True or false: image creation is an area in which you have practice and expertise. See, you're going "everyone is an idiot but me." Stop that. It's because if you want the fake to work it has to be carefully crafted to not stretch credulity. "Huh, look at all the Taylor Swift nudes! I wonder if any of them are real!" - no one. Here's my gremlin opinion: Microsoft funds OpenAI because they KNOW it's poisoning Google. Example: We've been watching Hotel Hell with dinner. One of the games we play is "what happened after Gordon left." This involves a web search - and it's a perfect web search for AI. It's content nobody really cares about, driven by a large mass-media exposure with a long tail (the episodes aired in 2012). Now - check this out. That's an AI-generated website. It's also the top hit for something on Hotel Hell. If you dig into any of the blogs dedicated to "where are they now" reality TV updates you learn the place closed in 2020. If you look on Trip Advisor, you see that the last review was in 2020. But if you look on Facebook, Yelp, Kayak or anywhere else, there's a link farmer with a phone number and an email address who totally doesn't have a hotel but will absolutely take your credit card number! Bing's results aren't much better, but then, Microsoft doesn't make their money from search and never will, so fuck search. LLMs have no deep understanding, so they'll never come for anything that requires deep understanding. Shit, LLMs have no understanding. How many legs does an ant have? How many pawns on a chess board? These are the constraints that hobble an LLM; they don't make them better, so they're never going to grok that shit. If you need something that knows how many fingers hands should have, you need something other than an LLM.
But to be fair, I dunno why deepfakes haven't been more impactful, it's kind of a similar vein.
Maybe the most exciting thing is the possibility that this will eventually destroy the internet by eventually feeding its outputs back into inputs until the web fractalizes into nesting outrage bubbles interspersed with fake cute animal .gifs.
Since I'm self-righteous, I'd like to think one of the last things it'll come for is physics and math.
Kind of. Learned Photoshop in high school, messed with Illustrator recently. I script command-line image manipulation (ImageMagick) and video (ffmpeg). I'm only artsy enough to upset my Christian mother sometimes. I think you're asking about that, specifically, and no, I'm not the best painter, sculptor, drawer, logo-designer, or whatever. Sora's doing wayyyyyy better than me.

Kinda, but I'm an idiot too. Just hopefully not about this. You've posted stuff yourself that shows how quick so many people are to be fooled by some AI images. People are busy. They're in a hurry.

Oh this is absolutely true. My wife and I have literally done this for years. And Kitchen Nightmares, too. The website scam is pretty solid; there's gonna be a lot of that. It's already illegal, I'm sure, and companies should get in big trouble if their LLM is an accessory to fraud. The litigation surrounding stuff that's in a more morally gray area will be thrilling, I'm sure. One way or another.

I understand. Ha no, but it's kinda the ol' "magic is science we don't understand yet" thing. If it's passing the Turing test, it will feel intelligent. Indistinguishable, most of the time. It's easier and more productive to talk to online than at least half of America. And you can just photoshop out the extra digits and save yourself potentially hours upon hours of time without having to synthesize too much, my dude.

Again, yeah, legal stuff's gotta get sorted, but this tech is mos def my bet for most disruptive in this generation. Like a 15-year span. It'll be: cell phones -> internet -> social media -> LLMs. Wish I could tell you what I thought was next. Would if I could.

True or false: image creation is an area in which you have practice and expertise.
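For what it's worth, the command-line scripting mentioned here is easy to drive from Python. A minimal sketch - the filenames and sizes are made up, and this only builds the argv lists (run them with subprocess.run once ImageMagick and ffmpeg are actually installed):

```python
import shlex

def magick_resize_cmd(src, dst, width):
    # ImageMagick: resize src to the given width, preserving aspect ratio
    # ("800x" geometry means width 800, height auto).
    return ["convert", src, "-resize", f"{width}x", dst]

def ffmpeg_frame_cmd(src, dst, timestamp="00:00:01"):
    # ffmpeg: grab a single still frame at `timestamp` from a video.
    return ["ffmpeg", "-y", "-ss", timestamp, "-i", src, "-frames:v", "1", dst]

def as_shell(argv):
    # Render an argv list as a copy-pasteable shell line.
    return " ".join(shlex.quote(a) for a in argv)
```

From there, batch jobs are just a loop over files feeding these lists to `subprocess.run`.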
See, you're going "everyone is an idiot but me."
Microsoft funds OpenAI because they KNOW it's poisoning Google.
We've been watching Hotel Hell with dinner. One of the games we play is "what happened after Gordon left."
LLMs have no understanding.
FUN FACT: The Turing test was originally about "can you tell whether I'm a man or a woman," not "can you tell if I'm a robot." It's like that goddamn Potter Stewart quote - when you throw it in my face, it reveals that you've found a platitude to model your understanding on, not a theory.
Cyrodiil's Jesus! No, I tried generating something a touch less 4chan-does-Amnesia and more Balkan Romani without the perpetually disappointed look.

Theory is much less about hand-waving connections between deeply understood parts and more about doing the math with as few preconceived ideas as possible. Don't imagine what atom/potential/sun is; calculate and interpret what comes out, see if anyone tested something similar / calculated it in a similar regime. Propose an experiment, try to make a feedback loop with someone (or something) that'd bounce ideas back. It's everything else that ought to be automated, 'cause the amount of paperwork they try (underline: try) to pile on me is just fucking ludicrous. The problem is that models aren't better at determining they're wrong than humans, and are unlikely to learn it since their very nature is numerical bias. And, frankly, LLM/models/AI/whatever should have less of a problem replacing philosophy, because doing proper math requires pencils, paper and a wastepaper basket for wrong ideas... whereas philosophers seem to only ever need the first two.

Otherwise, I kinda stopped paying attention to anything that isn't directly related to my interests, tbh. Seems like everyone is losing their shit over anything and everything in the news/work/word holes, while I'm tackling the deeper mysteries of: is it better to keep seeing someone with a 3-year-old and see where it leads, or cut it loose before things get difficult for the kid more so than us. Same to you. We gotta do some meetup. I wanted to organize one in January, but my health took a dip; maybe it's time to try again.

We'll tell our grandchildren "we used to make our own handsome faces".
Since I'm self-righteous, I'd like to think one of the last things it'll come for is physics and math.
Anyway, I hope you are well. :)
Agree, the complexity put into making sure conclusions are correct-ish is going to be hard to replicate. Philosophy deserves every burn. Sorry. But only a little. We use machine learning pretty commonly now in my field. It's been harder for some of the older folks to grasp exactly how it works. But yeah, an algo isn't going to drive it right or understand the shortcomings. Not sure why you'd want a middleman, either. I'm not losing my shit, no worries. Well, kinda. I'm always at least kinda losing my shit, though. And hey, kids are... a lot... but I will say, men of much less resourcefulness than yourself have found fulfillment in adopting a kid. I struggle with patience, personally. I'll try to make the meetup, but my schedule really clears up in mid-April.
Eh, I'm being my usual exaggerated dismissive, but it's sad that the two camps most visible to me are essentially "it's only so unbiasedly rational of us to consider how many AGI could dance on the needle's head" and "mathless/IFLS quantum vibes" types. It's not even that I don't see the merits of those two, let alone philosophy at large, but that I have absolutely no fucking interest in either, yet they keep talking at me like I'm a lobotomite for not caring. And no, wasoxygen, I'm not calling you out specifically, it's just how you Yudkowsky-ites communicate. We're cool, I hope.

Well, ML/whatever excels at finding patterns, even if it can't/won't explain them. Having a tool that goes "exploring these parameter spaces is most likely worthless" or even "isn't it funny how second-order solitons only form when this parameter is divisible by 17?" may be invaluable to the right person who can find context for those observations. That's the "(or something)" in my previous comment. Tying this to "making sure conclusions are correct-ish is going to be hard to replicate" <- that's the bottleneck as far as I can see. First you have to separate seeds from chaff, and then make sure those seeds aren't blighty or cleverly disguised angry bears. I wouldn't mind science becoming (even more) akin to computer-assisted chess, though. Tools are tools, experts use tools better, so that checks out too.

Wasn't singling you out here, though I hope you take care of yourself and wife. And it's not like I don't understand or lack the presence of mind to understand why people are so agitated. I simply can't keep dealing with it. It's been two goddamned years, and I can't even force myself to go to Ukraine anymore. I haven't seen the worst, and it's too much. Focusing on what I can affect has to be enough for me right now. As to meetups: no worries, I can make another one in April or May. They're about as informal as flip-flops anyway.

Philosophy deserves every burn. Sorry. But only a little.
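The "divisible by 17" gag is actually a decent picture of what a pattern-flagging tool would do: collect the parameter values where something interesting happened, then test dumb candidate explanations and hand the hit to a human who can find the context. A toy sketch (the predicate and divisibility check are made up for illustration):

```python
def flag_interesting(param_runs, predicate):
    # Collect parameter values whose runs satisfied some predicate
    # (e.g. "a second-order soliton formed in this run").
    return sorted(p for p, result in param_runs.items() if predicate(result))

def shared_divisor(values, max_d=100):
    # Largest d in 2..max_d dividing every flagged value, or None.
    # A stand-in for "isn't it funny how these share structure?"
    for d in range(max_d, 1, -1):
        if values and all(v % d == 0 for v in values):
            return d
    return None
```

The tool's job ends at "these all happen to be multiples of 17"; deciding whether that is physics or coincidence is the part that stays human.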
Not sure why you'd want a middleman, either.
Losing shits and meetups
Since I'm like public journaling now instead of just allowing thoughts to pass through my head without any reinforcement and then showing up to hubski like "oh, I don't have anything", I'll give an example: "If I tried to LLM at work."

There's a global model of the magnetosphere and surrounding solar wind environment that I run through a public website. I query the model for a certain day or time that I want (step 1). Wait a few days, then I look through the results and do the science (step 2). For step 1, there is no benefit in having a program input the date and time with a few choices that I make for which sub-components of the magnetosphere model I want to use, because it takes about five minutes. For step 2, the way that I look through the data requires an entire methodology in which I'm using outputs from the model to re-input back into the next time-step for visualization. I'm tracing magnetic field lines through time/space and the magnetosphere as it convects (I've automated it using a Python webcrawler and maths to produce a movie).

The idea that I could simply ask an LLM to do this is pretty funny. It's so specialized that I can guarantee it would fail immensely to know wtf I meant when I said "take the results from this model run and show me a movie of magnetospheric convection. I want bundles of magnetic field lines that pass through the reconnection site near satellite XYZ emphasized". I think the amount of additional information I would need to feed it for the thing to even come close is infinite, because it's probably never going to give me something good. More on that below. But let's say that it does. It's the game of "how do I know it's right?" again. I've gotta inspect all of the code that it wrote to do it, and I can guarantee that it's gonna be an implementation that's a way different structure than mine. I'm going to put in so much effort checking it that I'm not going to save an iota of time. OK, so I have my video, one way or the other.
I can now look through it and do the actual science, linking it into an analysis of data from that satellite. There is simply no fucking way that any LLM or AGI on the foreseeable horizon could do this. Doing the science means comparing the new m'sphere model outputs to the existing data analysis, linking new interesting/publishable physics of the two, discussing how this is different or similar to previous studies, and thinking about how the results can be applied towards the next step. It requires a deep understanding of how this contributes to the field. This is at least approaching ASI territory.

Furthermore, for the science, the LLM or whatever it is has no interest in images. It cares only about model outputs. It would actually have to perform the conjugate of what I have to, and take the images from previous movies of magnetosphere convection and put them into a form for comparison with the magnetosphere model output data. The whatever-it-is will have to know how to transform the data into formats suitable for comparison, and then it'll have to have correctly ingested the publishing record to form a pseudo-understanding of everything. Can't imagine the lengths it would have to go to output something like "we can see that if the only difference is a Y-component reversal of the upstream magnetic field in the solar wind, the reconnection site moves southward towards the spacecraft, because the X-line is shifting to accommodate cusp reconnection relocating from the dawn/north and dusk/south quadrants to the dawn/south and dusk/north quadrants, respectively". Would the Whatever know that it'd be good to run the magnetosphere model I used for the period of time used in the previous study, which used a completely separate m'sphere model, to factor in the differences between the two models that might explain the behavior instead? Does it know that it's important to comment on the distance from the satellite to the reconnection site?
Is the data analysis conclusion that the satellite is at a reconnection site actually wrong? Are there shortcomings in the m'sphere model that help explain why the m'sphere model's reconnection site differs from where we actually found it? It's obviously not advisable to expect this inside of two decades - or several. Maybe it could build me a movie, but I doubt it. Unless I am guaranteed that a running instance of my efforts to coach it is preserved and always available should I achieve a successful/correct movie once, or that any new pseudo-understanding I had to lead it to is properly assimilated into the root system, there's no reason to even begin trying. Correct me if I'm wrong, but that's not something publicly available yet, and I can see massive hurdles to it ever happening. lol, what am I gonna say? "That's right! You finally did it. Now, don't forget how to do this the next time I ask, I don't want to have to spend another seven months filling in the gaps in your understanding of this again"? Hahaha.

"Filling in gaps of understanding" deserves a dissection, because it's more general - not just for physics or science, but for anything. The process looks like hell. Because, like we've said, the LLM doesn't know what's "correct", it's not going to ask you any substantive questions. It's going to output what it outputs, and you'll have to look at the outputs and tell it why it's wrong. Iteratively. Having it fix one thing could break another. It could even infinitely diverge instead of ever converging on the solution you want. This all assumes that you know what you're looking for - what "right" means. And then, even if it does get things right, yeah, unless you work at the company that owns the LLM, it's all forgotten when you close the instance. Job security. Job security for all!
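To give a flavor of the field-line-tracing step described above - and only a flavor; this is an ideal dipole, not the actual magnetosphere model, and every number here is made up - the core of a tracer is just integrating along the local field direction:

```python
import math

def dipole_b(x, y, z, m=1.0):
    # Ideal magnetic dipole (moment m along +z), arbitrary units:
    # B = (3 (m.rhat) rhat - m_vec) / r^3
    r = math.sqrt(x*x + y*y + z*z)
    mdotr = m * z / r
    return ((3 * mdotr * x / r) / r**3,
            (3 * mdotr * y / r) / r**3,
            (3 * mdotr * z / r - m) / r**3)

def trace_field_line(p0, step=0.01, n_steps=5000, r_min=1.0):
    # RK4 integration of dr/ds = B/|B| (unit tangent along the line);
    # stop if the trace reaches the "planet" surface at r_min.
    def tangent(p):
        bx, by, bz = dipole_b(*p)
        n = math.sqrt(bx*bx + by*by + bz*bz)
        return (bx/n, by/n, bz/n)
    pts, p = [p0], p0
    for _ in range(n_steps):
        k1 = tangent(p)
        k2 = tangent(tuple(p[i] + 0.5*step*k1[i] for i in range(3)))
        k3 = tangent(tuple(p[i] + 0.5*step*k2[i] for i in range(3)))
        k4 = tangent(tuple(p[i] + step*(k3[i]) for i in range(3)))
        p = tuple(p[i] + step*(k1[i] + 2*k2[i] + 2*k3[i] + k4[i])/6
                  for i in range(3))
        pts.append(p)
        if math.sqrt(sum(c*c for c in p)) < r_min:
            break
    return pts
```

A handy sanity check: every dipole field line satisfies r = L sin²θ, so L = r³/(x²+y²) should stay constant along the trace. The real job - stepping a convecting model through time and re-seeding the bundles each frame - is everything wrapped around this loop.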
I had a discussion with an old buddy about LLMs yesterday. He's writing fiction and is using ChatGPT like a rented mule. He's got a character who's modeled on Andrew Tate but he wants him to be annoying, not a villain, so he'll type "give me ten things a sexist asshole would say about women that aren't awful." He's got a character who's a vampire so he'll type "give me a list of insults a vampire would use against townsfolk." Or he'll be analyzing plot points and he'll say "give me a list of movie scenes that would radically change the movie if they were absent." In each one he goes through and picks what he likes. In the last one he argues with it.

I pointed out that he's basically using ChatGPT like an extended thesaurus and he agreed. I also pointed out that if you ask an LLM "give me the stochastic mean of this vector through a set of points" you are using the LLM as it was intended to be used - it will give you the mediocrity every time and, because it's basically a hyperadvanced Magic 8 Ball, every now and then it will be brilliant. But - I pointed out - when you ask it for an opinion it will fall down every time because it has absolutely no handles on any of its inputs and outputs. You can't ask it to tell you what scenes are crucial because it has no understanding of any of the concepts underneath. What it has is a diet of forum posts that it will never give you straight.

Shall we play "how can ChatGPT do my job?" 'cuz they've been trying to AI-automate my job forever. See this guy? They were about $1500 back in '94. And what they do is analyze the audio signal passing through them looking for feedback, and then they drop one of eight filters on it. You can adjust the sensitivity to feedback, you can adjust the latch, you can adjust the release, you can adjust the aggressiveness.
They were really big until about 2005 or so, when it became cheap and easy to TEF-sweep a room and ring it out to EQ out the frequencies that cause things to ring - I'm sitting here surrounded by ten speakers at 85dB, and having spent an afternoon mapping and collating and inserting between 4 and 15 filters per channel, I can't get feedback if I hold a condenser in front of left main. Could an AI have done that? fuck yeah. That would have been delightful. But not without me moving the mic sixty times, so what time am I actually saving? That active-seeking feedback reduction thing has made it into machine tools - each servopak on my mill has more filters than that Sabine. And in general, the approach everyone takes is "set as many as you need to kill steady-state, use the roaming ones carefully" because who knows what modes you'll run into with this or that chunk of aluminum strapped down getting chewed up.

Everything I've got is already a waveform. We've been using Fourier transforms to operate on them for 40 years. My life is nothing but math. And despite the fact that GraceNote has literally released every song they know about as training data, telling the AI "make my mix sound better" still fucking failwhales. Like, on a basic, simple level. It understands what the sonogram of a song should look like, but that's like reconstructing a fetus from an ultrasound. What you get is uncanny valley nightmare fuel. I don't need the mediocre middle of a million mixes, I need excellence. And excellence comes from humans because it is, by definition, not the mean. Anyone expecting that a machine purpose-built to give you a statistical average can give you only the good outliers is going to be disappointed for the simple fact that the machine doesn't understand "good" or "bad"; it understands "highly rated" or "much engaged with." The machine thinks this is the best Jurassic Park cover ever made: And the only way you can deal with that is to nerf it out on a case-by-case basis.
You could argue that LLMs are good for facts but not opinions but the problem is its method for handling facts only works for opinions. Are they useful? Yes. Are they a tool that will make big changes to a few industries? I don't see how they can't. Am I honestly excited to see their actual utility? You damn betcha. But where the world is now is this: People who don't understand AI inflicting it on people who don't need AI to the detriment of people who don't want AI. That's it. That's the game.
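The "drop one of eight filters on it" trick above is, at its core, parking a narrow notch on the offending frequency. A minimal sketch of one such notch - the standard RBJ-cookbook biquad form, not whatever the Sabine actually ran, and the sample rate and Q are arbitrary:

```python
import math

def notch_coeffs(f0, fs, q=5.0):
    # RBJ cookbook notch biquad: zeros on the unit circle at f0
    # (gain exactly zero there), poles just inside (q sets the width).
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    cos_w0 = math.cos(w0)
    a0 = 1 + alpha
    b = (1 / a0, -2 * cos_w0 / a0, 1 / a0)
    a = (-2 * cos_w0 / a0, (1 - alpha) / a0)
    return b, a

def biquad(x, b, a):
    # Direct form I: y[n] = b0 x[n] + b1 x[n-1] + b2 x[n-2]
    #                       - a1 y[n-1] - a2 y[n-2]
    y, xh, yh = [], [0.0, 0.0], [0.0, 0.0]
    for xn in x:
        yn = b[0]*xn + b[1]*xh[0] + b[2]*xh[1] - a[0]*yh[0] - a[1]*yh[1]
        xh = [xn, xh[0]]
        yh = [yn, yh[0]]
        y.append(yn)
    return y

def rms(sig):
    return math.sqrt(sum(s*s for s in sig) / len(sig))
```

Feed it a howl at the notch frequency and it vanishes; a tone an octave or two away sails through nearly untouched - which is exactly why eight of these, plus something deciding where to park them, was worth $1500 in '94.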
Ahh, of course, the feedback thing. I don't do anything live, so I can just get away with a pretty simple gate and headphones. No chance of loops. Hadn't really thought about how I would suppress feedback loops without killing the channel or at least lowering the volume. But now I completely get it. I got really close to connecting the dots a long time ago when I suggested basically TEF in a convo with you a few years back. My mistake was thinking about mixing. I was thinking about minimizing phase cancellations as a function of frequencies.

But duh: My co-worker would bolt a plasma spectrometer with accelerometers on it to a vibration table with some special isolators between the instrument and mounting baseplate, and we'd shake them with a sine sweep survey starting from like 1 Hz up through, I dunno, 40 kHz or something like that, and a power spectrogram level was input to govern the amplitude around each frequency. JUST like what you're doing with mics? We do it too. We'd already calculated the approximate normal modes of the instrument from 3D CAD models (we used Ansys), and so we notched the input frequency spectral energy around the normal modes so we don't overdrive the thing during vibe testing. And then we shake it with the launch environment, a white-noise spectrum, still modestly notched around the normal-mode frequencies (which might have needed slight readjustments from the sine sweep results).

By the way, at GSFC, they have like a 10-foot-diameter gramophone to just blast shit with. I'd guess it was for Saturn Vs, hahah, but I don't know! Didn't get the story. (edit: ohhhhh, I think it might've been for cleaning, especially considering that it was being kept in one of the anterooms bordering a clean room. They must be using the thing to knock any loose particles off of equipment or instruments with sound. We did the same thing with an ultrasonic bath after de-greasing parts with trichlor, before the final isopropyl wipe-down.
They'd soundblast it after that. Probably a pretty clean room.)

Which has its uses, heh, though perhaps mostly uncommercializable. Absolutely agree. The LLM is navigating topological features inside a parameter space. With boundaries, and curvature, yeah. It's what I'm doing for the magnetosphere, actually. Same kind of idea. Except with, I dunno, maybe a billion axes instead of the four I use. But yeah, sometimes if you move just a little bit in the parameter space from where you started last time, or you start off in a slightly different direction, the topology might map to some drastically different places. Occasionally they will conjoin into beauty. AISI: artificial idiot savant intelligence.

Hadn't heard any AI tunes yet, and figured there was good reason for it. I don't go looking for them, and a really good one would have found its way to me by now if it existed. We don't, agreed. I only want it for selfish reasons. And I only want it if I can feel assured it isn't going to cripple society. So I don't want it. Nvm. Feels like we're all getting a better handle on the level of complexity to expect, though. It'll change. Hopefully not too fast - this has apparently been jarring enough for the world already - but AGI in two years? I just don't think so, and I'm 100% sure that ASI isn't only three years out.

What you get is uncanny valley nightmare fuel
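The notched sine-sweep drive described above is simple to sketch: a logarithmic sweep whose commanded amplitude drops near each predicted normal mode so the article doesn't get overdriven. All the numbers here (notch depth, width, frequencies) are invented for illustration, not real test levels:

```python
import math

def drive_level(f, base=1.0, modes=(), depth_db=12.0, half_width=0.05):
    # Reduce the commanded amplitude within +/-5% of each predicted
    # normal-mode frequency (a crude stand-in for a shaped test profile).
    for fm in modes:
        if abs(f - fm) <= half_width * fm:
            return base * 10 ** (-depth_db / 20.0)
    return base

def log_sweep(f0, f1, duration, fs, modes=()):
    # Logarithmic sine sweep with the per-frequency notched amplitude.
    n = int(duration * fs)
    k = math.log(f1 / f0)
    out, phase = [], 0.0
    for i in range(n):
        t = i / fs
        f = f0 * math.exp(k * t / duration)  # instantaneous frequency
        phase += 2 * math.pi * f / fs
        out.append(drive_level(f, modes=modes) * math.sin(phase))
    return out
```

In a real campaign the notch centers would come from the Ansys modal analysis and get nudged after the survey sweep, exactly as described in the comment above.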
I also pointed out that if you ask an LLM "give me the stochastic mean of this vector through a set of points" you are using the LLM as it was intended to be used - it will give you the mediocrity every time and, because it's basically a hyperadvanced Magic 8 Ball every now and then it will be brilliant.
...people who don't need AI...
that sounds so fucking awesome

Well what you're doing is ringing out the frequency response, right? You're trying to find constructive modes that are going to fuck you over while strapped in a rocket. You do that with an equalizer if it's sound, or filters if it's an electromechanical system. I've linked this before, the eldritch magic starts at 3:35: For the record, the last time I used ANSYS it was a command-line program that ran on a DEC Alpha.

that sounds so fucking awesome

You are grossly underestimating the ease with which bad mixes can be produced. The computer music cats have been doing "generative music" for a long time. It's easy as shit and doesn't require an LLM. Most of them are some form of neural network somewhere; "random ambient generator" has been an off-the-shelf product category for 20 years. Here's a free plugin for Kontakt. Here's a walk-through for Ableton.

My co-worker would bolt a plasma spectrometer with accelerometers on it to a vibration table with some special isolators between the instrument and mounting baseplate,
and we'd shake them with a sine sweep survey starting from like 1 Hz up through, I dunno, 40 kHz or something like that, and a power spectrogram level was input to govern the amplitude around each frequency. JUST like what you're doing with mics? We do it too.
We'd already calculated the approximate normal modes of the instrument from 3D CAD models (we used Ansys)
By the way, at GSFC, they have like a 10 foot diameter gramophone to just blast shit with.
Which has its uses, heh, though perhaps mostly uncommercializable.
Hadn't heard any AI tunes yet, and figured there was good reason for it. I don't go looking for them, and a really good one would have found its way to me by now if it existed.
Absolutely. The normal modes. As it goes: first is the worst, second is the best, third is the one with the treasure chest. Sometimes it's "hairy chest", depends on the elementary school. When people use generative stuff in music well, it's noted. One of the most ridiculous arpeggio parts ever was made with Omnisphere's arpeggiator and then meticulously adapted for guitar. Probably took a little bit of practice (the rest of my life, in my case).

Well what you're doing is ringing out the frequency response, right?
Dunno, probably not, but I think you could instantiate one that can when they can, and freeze its learned ability, so the whole hoping-it-doesn't-forget might go away. But I have no idea. Don't write that much code or work with raw data these days, so bibliographic aid is just about all it can do for me in an hour of need. Otherwise, it's about as tangential to my goings-on as it can get. When I tried that 'explain paper' site, it left enough of a distaste for me to roll eyes and move past. Between absolutely fucking insisting that some unrelated mathematical concept[0] is absolutely crucial to explain my question, and rephrasing a circular argument until I got bored and left, I probably won't bother again for quite a while. Unfortunately, the above experience means I'm unlikely to trust LLMs with stuff I don't know a lot about.

Also, I kinda regret writing anything in this thread and will probably just add more tags to my ignored list. Fun company notwithstanding - too much hassle, too few fucks left.

[0] - I wrote and deleted a 900-word footnote of jargon about orbits of the coadjoint representation groups and operators in de Sitter space, so let's pretend I said Tits index and wiggled my eyebrows in an amusing way.

Correct me if I'm wrong
That is the only way to fly, in my opinion, and we haven't discussed this much (edit: well nah, we kinda have), but people aren't going to use it like that, obviously. Don't blame you for any filterings. I kinda like livening up this place. It's LLM season on hubski, baby.

But one last quick story! I'm a couple miles from home standing in line to order a burger (probably in flip-flops again) and a guy gets in the to-go line. Says "Order for so-and-so", and the cashier checks the order tickets. Nothin'. He says "I called such-and-such number". She refers to some post-its behind her, and sees that it's the other branch across town that he called and ordered from. He then says "watch", pulls up his phone, and goes "Siri, call Restaurant X on Street Y" (where we are), and it was replicable: it dialed the other branch again. He goes "so it's not my fault. I should get some food for free, I already paid". And I think he did. And he cut everyone in line. I wasn't in a hurry, it was nice to have front-row seats for such a prescient demonstration. It's gonna be a fun time....

I'm unlikely to trust LLMs with stuff I don't know a lot about.
When every foodhole in Warsaw connected with delivery service overnight, outgoing orders had much, much higher priority. So, during the pandemic, you had a crowd of deliverers, a normal line that moved at a snail's pace, and a nearby crowd of people who placed their orders in an app to game the system. This led to a situation where people from the last group placed orders to <restaurant's address> and added comments like "I'm the one wearing a brown hat with a gigantic pompom" or "I'm already behind you." Insert something about the follies of idiots with access to technology. I don't know, I barely slept since Friday.

I wasn't in a hurry, it was nice to have front row seats for such a prescient demonstration.
Yeah. Gonna be a lot of LLM Florida-man stories. Same. But I do like checking back in here when I hit a roadblock at work. It's synergistic. Good luck with your coming week. Mine's gonna be crunch time, but I think I'm almost ready. Peaceeeee

barely slept since Friday
I could honestly benefit from an involved re-visiting of philosophy, but it doesn't really feel terribly necessary, all things considered, at the moment. This is my field's IFLS. Except not, because it's just flat-out wrong, as opposed to a flavorful interpretation of quantum mechanics.

No worries, fam is good. I'll periodically re-enter an "oh shit, it's fascism!" check-in phase, but I try to keep it Stoic more of the time, these days. Like, I'm not chanting the serenity prayer, just wishing for the same thing more often than I used to.

OH, that reminds me: For your consideration, I'd like to submit the most American thing ever done, possibly. I wore my flip-flops to the McDonald's in downtown Bern, Switzerland, while unknowingly incubating covid that I'd gotten on the plane ride. A homeless woman outside goes "PLUGH, l'Americaine!", and all I could do was think to myself, "I know, right?".

They're about as informal as flip-flops anyway.