How would you feel about AI being used for deceased voice actors?

Discussion regarding the entirety of the franchise in a general (meta) sense, including such aspects as: production, trends, merchandise, fan culture, and more.
User avatar
Skar
I Live Here
Posts: 2327
Joined: Mon Jul 01, 2013 11:04 pm
Location: US

Re: How would you feel about AI being used for deceased voice actors?

Post by Skar » Thu Jun 12, 2025 10:15 am

Jord wrote: Thu Jun 12, 2025 9:04 am In theory you are right, but in practice companies often interpret these things differently.
Let's say the next DB show has AI voice acting for Goku.
You respond by not buying the DVD set/not streaming.
The company may see this as other aspects of the show not clicking with the audience.
You'd have to boycott and contact the company, for example on social media, to make a statement. And you'd have to be a majority, with lots of people not buying it.

It would be interesting to see what would happen and how many people would care enough about it.
Yeah, unfortunately boycotts vary in how effective they've been. I just mean it's the only option for an average person to make any difference, even if only a minor one. Sharing news and spreading awareness is important, but it doesn't really matter in the end if enough people are still willingly giving their money to these companies.

An example: I'm part Palestinian and boycott companies that support the Israeli government and settlements. More and more people are boycotting worldwide, and these companies have lost some money. Still, the vast majority of people don't care to boycott. They can acknowledge that what's happening there is tragic, but it's not enough to overcome the convenience of buying from these corporations instead of spending a few minutes finding an alternative company to buy from.

Another example is the "support local" movement, with some people only buying from local or smaller companies even if it costs more. Most average people agree that it's not good for major corporations to keep growing bigger, leaving consumers fewer options, lobbying politicians, etc., but not many would consider withholding their money, since a local or smaller company might cost more or might not be available in all stores.

I think the difference with AI is that the alternative is more convenient than supporting the studio. Everyone has access to pirated content with just a few seconds of searching on Google. It's much easier than driving to the theater or paying for multiple streaming services. I feel the only reason to choose the less convenient option is if your goal is to support the people working on the content. I'm not advocating that anyone pirate media, just pointing out that people are fine doing it despite being told it's wrong, because it's so accessible and easy to find.

User avatar
BootyCheeksJohnson
Beyond-the-Beyond Newbie
Posts: 412
Joined: Wed Jan 13, 2021 6:12 am

Re: How would you feel about AI being used for deceased voice actors?

Post by BootyCheeksJohnson » Fri Jun 13, 2025 10:10 am

JulieYBM wrote: Mon Jun 09, 2025 10:55 am Not to wax poetic on main, but:

Lately, I've thought about how difficult it must be to be a parent. As someone who loves the arts, it would be a nightmare scenario to never be able to teach a child to love the arts and the rewarding feelings that come with appreciating and creating art.

This AI shit—and the cultish support for it in so many places—really does run in opposition to all that. I'd be ashamed of myself for raising someone who saw AI as some sort of acceptable experience and replacement for actually doing the creative work necessary to create a piece of art.
DC Douglas, who voiced Wesker in Resident Evil for the longest run, supports generative A.I. art, but not for voice acting. So even real "artists" are pulling the "do as I say, not as I do" act. Rather unfortunate.
We need a Steve Simmons re-translation of the manga.

User avatar
JulieYBM
Patreon Supporter
Posts: 18512
Joined: Mon Jan 16, 2006 10:25 pm
Location: 🏳️‍⚧️🍉

Re: How would you feel about AI being used for deceased voice actors?

Post by JulieYBM » Fri Jun 13, 2025 10:23 am

BootyCheeksJohnson wrote: Fri Jun 13, 2025 10:10 am
JulieYBM wrote: Mon Jun 09, 2025 10:55 am Not to wax poetic on main, but:

Lately, I've thought about how difficult it must be to be a parent. As someone who loves the arts, it would be a nightmare scenario to never be able to teach a child to love the arts and the rewarding feelings that come with appreciating and creating art.

This AI shit—and the cultish support for it in so many places—really does run in opposition to all that. I'd be ashamed of myself for raising someone who saw AI as some sort of acceptable experience and replacement for actually doing the creative work necessary to create a piece of art.
DC Douglas, who voiced Wesker in Resident Evil for the longest run, supports generative A.I. art, but not for voice acting. So even real "artists" are pulling the "do as I say, not as I do" act. Rather unfortunate.
Yeah, I'm not surprised. Grifters exist in minority communities too, so it doesn't shock me at all that they'd exist among artists as well.
💙💜💖 She/Her 💙💜💖

User avatar
Hellspawn28
Patreon Supporter
Posts: 15693
Joined: Mon Sep 07, 2009 9:50 pm
Location: Maryland, USA

Re: How would you feel about AI being used for deceased voice actors?

Post by Hellspawn28 » Fri Jun 13, 2025 3:13 pm

It's sad how many people will defend AI for art and acting. If AI was used for any other work field, people would lose their shit about it. It shows that people don't care about art as a whole.
She/Her
PS5 username: Guyver_Spawn_27
LB Profile: https://letterboxd.com/Hellspawn28/

User avatar
JulieYBM
Patreon Supporter
Posts: 18512
Joined: Mon Jan 16, 2006 10:25 pm
Location: 🏳️‍⚧️🍉

Re: How would you feel about AI being used for deceased voice actors?

Post by JulieYBM » Fri Jun 13, 2025 3:17 pm

I once had a customer who looked like Christian Bale and when I mentioned it, he said, "I have a real job." That just about says it all when it comes to how we've devalued art in our society lol
💙💜💖 She/Her 💙💜💖

User avatar
MasenkoHA
Born 'n Bred Here
Posts: 7271
Joined: Fri Feb 24, 2017 9:38 pm

Re: How would you feel about AI being used for deceased voice actors?

Post by MasenkoHA » Fri Jun 13, 2025 3:50 pm

Hellspawn28 wrote: Fri Jun 13, 2025 3:13 pm It's sad how many people will defend AI for art and acting. If AI was used for any other work field, people would lose their shit about it. It shows that people don't care about art as a whole.
Ethics aside. AI art is fucking hideous and the less we see of it the better. The amount of times I unfollowed people on social media because they kept sharing AI art that was an assault on my eyes is unreal.

User avatar
JulieYBM
Patreon Supporter
Posts: 18512
Joined: Mon Jan 16, 2006 10:25 pm
Location: 🏳️‍⚧️🍉

Re: How would you feel about AI being used for deceased voice actors?

Post by JulieYBM » Fri Jun 13, 2025 5:04 pm

MasenkoHA wrote: Fri Jun 13, 2025 3:50 pm
Hellspawn28 wrote: Fri Jun 13, 2025 3:13 pm It's sad how many people will defend AI for art and acting. If AI was used for any other work field, people would lose their shit about it. It shows that people don't care about art as a whole.
Ethics aside. AI art is fucking hideous and the less we see of it the better. The amount of times I unfollowed people on social media because they kept sharing AI art that was an assault on my eyes is unreal.
Oh god, yeah. I am so glad my feed is curated on Twitter lol. Any time I see it in search results I block immediately.
💙💜💖 She/Her 💙💜💖

User avatar
goku the krump dancer
I Live Here
Posts: 3675
Joined: Sat Sep 12, 2009 10:34 pm

Re: How would you feel about AI being used for deceased voice actors?

Post by goku the krump dancer » Mon Jun 16, 2025 11:19 am

I'm not as bothered by the AI anime art because my default instinct, as someone with an artistic hand, is to not take it super seriously at all. Like, I doubt there's gonna come a point where companies are gonna use AI in its totality to do concept art and character designs for future video games and such. It is interesting how detailed even some of these AI videos can get just by using the right program and prompting it the right way, though.

On topic, the only way I could ever be even mildly comfortable with AI replacing a voice actor is if the person literally passed away during the final moments of recording and they had to use AI just to wrap up the last few minutes of dialogue or something, but it'd have to be really advanced tech, otherwise it'd come off as painfully obvious and maybe take away from the project. Otherwise, just find a human replacement.
It's not too late. One day, it will be.
Peace And Power MF DOOM!
Peace and Power Kevin Samuels

User avatar
JulieYBM
Patreon Supporter
Posts: 18512
Joined: Mon Jan 16, 2006 10:25 pm
Location: 🏳️‍⚧️🍉

Re: How would you feel about AI being used for deceased voice actors?

Post by JulieYBM » Mon Jun 16, 2025 11:31 am

Companies have historically done everything they can to not pay labor. The push towards AI is specifically both to not pay artists and to remove the individual meaning from art. After all, an AI won't create art with a meaning that undermines the company's bottom line, à la art with a leftist, pro-union message.

Capitalists and the wealthy are not our friends; they want to control and kill us. AI helps with that.
💙💜💖 She/Her 💙💜💖

User avatar
goku the krump dancer
I Live Here
Posts: 3675
Joined: Sat Sep 12, 2009 10:34 pm

Re: How would you feel about AI being used for deceased voice actors?

Post by goku the krump dancer » Mon Jun 16, 2025 1:00 pm

Eh, idk, it's not that doom and gloom for me. The AI art doesn't even look good enough to consider replacing what a team of really talented people with the right budget can pull off. The Ghibli trend from a few months back was fun, but nothing looked convincing enough that folks legit thought Studio Ghibli was in serious danger of being replaced.
It's not too late. One day, it will be.
Peace And Power MF DOOM!
Peace and Power Kevin Samuels

User avatar
Hellspawn28
Patreon Supporter
Posts: 15693
Joined: Mon Sep 07, 2009 9:50 pm
Location: Maryland, USA

Re: How would you feel about AI being used for deceased voice actors?

Post by Hellspawn28 » Mon Jun 16, 2025 3:10 pm

The AI Ghibli art wouldn't even be good if a person had made it.
She/Her
PS5 username: Guyver_Spawn_27
LB Profile: https://letterboxd.com/Hellspawn28/

User avatar
goku the krump dancer
I Live Here
Posts: 3675
Joined: Sat Sep 12, 2009 10:34 pm

Re: How would you feel about AI being used for deceased voice actors?

Post by goku the krump dancer » Mon Jun 16, 2025 4:03 pm

Hellspawn28 wrote: Mon Jun 16, 2025 3:10 pm The AI Ghibli art wouldn't even be good if a person had made it.
Yeah, that's kinda the point I was making. While I personally found it fun as a spectacle to see people recreate scenes from movies and TV shows in the style of a Ghibli project, none of the artwork in and of itself was good enough to consider replacing real artists.
It's not too late. One day, it will be.
Peace And Power MF DOOM!
Peace and Power Kevin Samuels

User avatar
FoolsGil
Born 'n Bred Here
Posts: 5038
Joined: Tue May 21, 2013 10:37 pm

Re: How would you feel about AI being used for deceased voice actors?

Post by FoolsGil » Mon Jun 23, 2025 1:58 pm

On one hand, if done ethically (estate permission, paid royalties, only using recordings of the voice actor), it wouldn't be bad.

On the other, that's a job lost by a living VA. And at full exploitation, a production would only use dead VAs, and potential careers would never get started.

Dr. Casey
OMG CRAZY REGEN
Posts: 940
Joined: Sat Aug 11, 2007 7:05 pm

Re: How would you feel about AI being used for deceased voice actors?

Post by Dr. Casey » Wed Jul 02, 2025 1:57 am

Okay. I haven't read much of this thread, but given that I'm seemingly much more familiar with AI than anyone else here, I think I should give Kanzenshuu a primer.

Starting with the Chicago Sun-Times story about the 15 books, I dug into the story some more. Turns out that story is horrifically misleading. Yeah, the Sun-Times guy should have done his research, but so should have the person who wrote the article reporting on the incident, because there's some pretty fucking massive misrepresentation going on here.

For starters: just as I expected, no, the AI used was not a modern state-of-the-art AI. The AI used could have been as old as late 2022 or early 2023. By AI standards that is terribly old, and by no means whatsoever should it be used to gauge the capabilities of the technology. To cite this example as relevant (as the news article does) would be akin to someone confidently stating in 1995 that videogames never have included, and cannot include, voice acting based on a sample of 100 Sega Master System games... unaware that if you move beyond 8-bit consoles, yes, voice acting existed by the late 80s at the latest in games like Ys.

Just as importantly, Search was not enabled - the tool that allows AI to access the internet for information.

Here's a bit of AI 101. If you disable tool use, of fucking course AI won't perform as well. LLMs are supposed to be able to utilize tools - Python, web search, various other programs and APIs. Tool use only started to emerge during late 2023 and early 2024, and its absence during the early days is a large part of why certain negative stereotypes arose around AI during 2023 - without tool use, raw LLMs are much less capable.
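
To make "tool use" concrete, here's a rough, provider-agnostic sketch of what that loop looks like. Every name in it (call_model, web_search, ToolCall) is a made-up stand-in, not any vendor's actual API; real SDKs expose the same idea with their own types.

from dataclasses import dataclass

@dataclass
class ToolCall:
    name: str       # which tool the model wants to run
    argument: str   # what it wants passed to that tool

def web_search(query: str) -> str:
    # Stub: a real version would hit a search API and return snippets.
    return f"[search results for: {query}]"

TOOLS = {"web_search": web_search}

def call_model(messages):
    # Stub standing in for the actual LLM call. A real model either answers
    # directly (a string) or asks for a tool (a ToolCall).
    if not any(m["role"] == "tool" for m in messages):
        return ToolCall("web_search", "2025 summer reading recommendations")
    return "Here are 15 real books, grounded in the search results above."

def answer(question: str) -> str:
    messages = [{"role": "user", "content": question}]
    while True:
        result = call_model(messages)
        if isinstance(result, str):
            return result  # the model answered outright
        # The model asked for a tool: run it and feed the output back in,
        # so the next model call is grounded in real data instead of memory.
        output = TOOLS[result.name](result.argument)
        messages.append({"role": "tool", "content": output})

print(answer("Recommend 15 books for a summer reading list."))

Take away that middle step (or use a model from before it existed) and the model has nothing but its training data to lean on, which is exactly the failure mode the Sun-Times list ran into.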

It's not at all reasonable for the article to criticize the technology when a very important feature that helps it to work properly was disabled (or perhaps nonexistent if the AI really was two and a half to three years old). That would be like if a guy in 1985 played his first videogame, let's say Super Mario Bros. 1, and his controller got unplugged and he was no longer able to control Mario. He becomes enraged. "What the FUCK is going on? I can't play just because the controller is unplugged?!?!?????!!!!!! What the fuck! That makes no sense! I KNEW videogames were a scam!"

You can use any number of similar analogies here. A 40-year-old man in 1915 becomes bewildered and confused because the "lightbulb" thing turns off when the "power" goes out. A person who gets their first TV in 1954 thinks it isn't working right because 24/7 TV isn't a thing yet and so it displays static at night. Use whatever analogy you like; they're all about the same as what's going on here.

Hallucinations (AI making things up) are indeed still a problem worth noting and considering, but the problem has become dramatically better over time in terms of both hallucination frequency and the nature/severity of the hallucinations themselves.
To compare o3 (the current publicly available SOTA), GPT-4.1 (a currently available model, but only for a small audience), and GPT-5 (the upcoming flagship model): o3, with tools enabled, has a hallucination rate of around 7 percent. That's much better than the 25 percent of July 2023, but still significant. GPT-4.1, available to a small audience, has a hallucination rate of around 1.7 percent - that's dramatically better and maybe negligible. GPT-5, slated for a summer release, should get the hallucination rate under one percent.

The severity of the hallucinations has also declined. 2024 AI has a generally true-to-reality basic understanding of the world, but makes bizarre mistakes that no human being ever would and pulls facts out of its ass that - though plausible - are entirely false. (Keep in mind that the best public model, OpenAI o3, is from December 2024 at latest. It is already outdated.) People who have used 4.1, though, say that the mistakes it makes are much milder. No bizarre fabrications, just understandable mistakes - things like misunderstandings of ambiguously-worded sentences rather than the present day "lol I made it the fuck up." GPT-5 will be dramatically smarter and make even fewer mistakes, sub-1 percent and typically of a mild, human-like nature.

4.1 has also been stated by testers to say "I don't know" when it doesn't know something. No, I don't know why this is an ability that's only being developed in 2025-era AI, it seems like basic common sense design that should have been implemented years ago, but better late than never. This means that incidents like the Chicago Sun-Times mishap are extremely unlikely with 2025 AI. This isn't speculation, it's already confirmed stuff from models being tested.

So the "AI isn't connected to reality" idea that that news article postulates isn't exactly untrue, but it's a heavily biased piece of work that omits any inconvenient details. "AI is completely unreliable! Look at this mistake! No, I will not supply any context. I will not mention that at best it was a modern AI with critical features disabled, and at worst it's a very old, very dumb AI from before those features even existed. Oh, and I definitely won't mention that this flaw has basically been erased in AI currently in testing, who cares about unimportant details like that? It's the 2020s and being a part of the AI witch hunt is fucking in!"

To prove with an example just how misleading that article about the Chicago Sun-Times was, I asked the exact same question to o3 with search enabled. I double-checked every bit of info to see how well it performed. I only gave it one chance, I didn't re-roll for the best results, and in order to stress test the model I said that each book recommendation had to be from a different source.

All 15 books are indeed real books. Zero fabrication here.
All 15 authors were correctly attributed to their respective book.
12 of the links provided were proper links (the other three links were mislinks; I had to use Google Search to confirm that the books were real).
11 of the 15 'publication dates' - the publication dates for the articles, not the books themselves - were correct. The four that were wrong all got the correct month but an incorrect date.

So, with a maximum score of 60 here, o3 scores a 53. That proves pretty decisively that the article was extremely misleading. The entire point of the article was that AI made up 10 out of 15 summer reading books; I'm just a regular person rather than an AI expert, and even I knew the solution. Using a modern model with search enabled, all 15 books were real and all 15 authors were indeed the authors - 100 percent on the part that actually mattered.
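
If anyone wants the scoring spelled out, it's just one point per correct item across the four categories above (purely illustrative Python, nothing tied to any real tooling):

# One point per correct item, 15 possible in each of the four categories (60 total).
correct = {"books_real": 15, "authors_right": 15, "links_valid": 12, "dates_right": 11}
print(sum(correct.values()), "/ 60")  # -> 53 / 60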

I'm not the only person who thinks this, but AI entered mainstream awareness too soon. Once people form a negative impression of something, it can take a long time for society to catch up and accept the new reality if that 'something' ever improves. Some products, ideas, whatever still have a negative reputation due to early failures from years or decades ago. I'm sure that AI's redemption arc won't take as long, but it wasn't a good idea to begin spreading AI in late 2022 when it was an extremely half-baked product. 2023 AI was basically terrible, 2024 was still largely bad. Even here in July 2025 it's sometimes great, sometimes frustrating. Q3 2025 should be where friction mostly disappears for the SOTA chatbots - perhaps only the expensive ones - with low hallucination rates and so on; by the end of 2026 hallucinations should be irrelevant for all AI models, low and medium tiers included, and all other areas of AI, like video generation and coding, should be fully reliable.

Even current AI is highly useful for many things, though. I got o3 to read the 2.4 million word series I wrote from back when I was a writer (I retired after finishing in 2018 and am never writing again), and asked many questions about it. What MPA rating each individual story would receive were they movies, each character's MBTI or D&D alignment, what characters handled their traumas the best and worst, whether characters with bad attributes (whether full-on villain or toxic protagonist) were sympathetic or not, what mental illnesses or personality disorders the characters might have, etc. It was fun. The experience wasn't perfect, it's 2025 and hallucinations exist, but it got 10x or 20x more facts right than wrong, and its feedback was really thoughtful and interesting.

As a random aside. I don't know if the topic was broached much here because I barely even skimmed the thread, but as a former artist who wrote one of the longest books in all of human history - unpublished and only written for myself - I'm very much in favor of AI in art (minus some obvious misuses like job displacement or actual plagiarism).

I think a lot of people who are firmly against any form of AI art have tunnel vision and don't understand that for some use cases, AI art is perfectly valid - namely cases where the product simply will not exist otherwise because the person doesn't have the ability or willingness to create it. I always thought that it would be fun to have AI create fanfics or bonus stories based off my book. I am a retired writer and there is absolutely no way in hell I am ever writing another story. AI excels for that purpose - stories that won't exist otherwise because you don't really care and don't have any drive to write them (and honestly the idea of stories I didn't write myself, where I don't know what to expect, sounds really fun).

I never would have used AI for the series proper had it existed back then, nor for any other genuine or heartfelt project. Not because I'm anti-AI (Obviously) but because I was the writer and I wanted every sentence, paragraph, and page to come from me. It wasn't good enough for it merely to exist at all, as is the case with the hypothetical bonus stories; I wanted the story to exist in the precise form that I wanted it to exist.

So it's not as simple as "AI art always bad, manual work always good." It's a matter of precision vs. convenience. If you draw, write, or compose something, the finished product takes whatever form you want it to take. You usually want precision for heartfelt projects, obviously. But there are also cases where you simply want something to exist, and adhering to a very specific vision is not the goal. In that case, convenience is the goal, not precision, and at those times AI art is entirely okay.

Some of the people in the 2020s crusading against AI seem to believe that in order to have 'artist's pride' or whatever, you must be a Luddite that gatekeeps art against everyone except those who do things the 'proper' way. That is not even remotely true in any way whatsoever. If I'm to be perfectly honest, I almost certainly went through more to see my story through to completion than most of the artists online stirring up a shitstorm. I wrote a series of novels that in length trumps almost every other story ever written (some would say the story was too long and I needed an editor but fuck 'em, story was just for myself anyway), and I did it through chronic fatigue, through depression (both conditions showing up years before I even wrote the very first word), through burnout and writer's fatigue that I started experiencing four full years before I even finished the original series, with the sequel adding another two years (albeit after a five year vacation between the original and sequel series). Very few people would have pushed through the way that I did - that's evident by how many writing projects are dropped due to life obligations despite being infinitely shorter. I wrote all the way to the end despite the emotional and physical toll because I loved the story and wanted it to be told in full. I think about the story and characters frequently, and I always will no matter how long I live.

I have pride in my story, and by extension I suppose I have artist's pride (though I'd rather not use a term that cheesy). But that does not translate to a desire to be a gatekeeper. I can understand being against certain use cases of AI art... selling it, actual plagiarism (no, AI art is typically not plagiarism, it learns general patterns and rules the same way that humans do), job displacement, whatever... but being against it on a conceptual level and thinking that nobody should ever use it in any context is absolutely indefensible.

Generally speaking, I am always in favor of things being democratized. Of possibilities opening up and for more and more people gaining the ability to consume and access, or in this case create, things. Piss on artist's pride. Artist's pride is nowhere near as important to me (nor should it be to anyone) as the humanitarian joy of seeing abilities, possibilities, and forms of happiness and meaning reach as many people as they can.

Anyway...

There is a severe lack of understanding regarding AI. Some think it's incompetent and unreliable in every possible way; others think that it's competent and capable but has no potential benefits and is nothing but a tool for destruction. The former has already been addressed, and I would advise everyone to be skeptical of articles that amount to "lol AI sux and is so dum lol." Most beliefs about AI's supposedly underwhelming capabilities are not based upon actual reality, but half-truths, things that used to be true but no longer are, and things which were always complete myths even in the past.

There are very few techno-skeptics who have any appreciable understanding of the technology they have such strong feelings on. In the abstract, I get it. You don't read as much about things you hate as you do things that you enjoy. If every positive opinion and every negative opinion throughout history was collected and scored on its understanding and validity, the 'positive opinion' column would have a higher score - people explore things they like more in-depth. But the record needs to be corrected regardless since most people's understanding of AI is frozen in either September 2022 (when AI won the art contest) or December 2022 (ChatGPT debut).

Almost all technologies have debuted in an extremely primitive state. Telephones started with party lines, an idea that sounds absolutely atrocious by modern standards (and was likely a turn-off for many even at the time). 'Horror movies' of the 1890s were silent films under a minute long. 1980s PCs didn't have sound cards until late in the decade, and games required constant swapping of floppy disks. Televisions had extremely fickle reception, with people sometimes having to go onto the roof and hold the antenna straight. But AI, unlike every other technology throughout history, will never ever have its problems addressed (even though to a large extent it already has) and they will be forever immutable and unfixable because... reasons?

As for the idea that AI is purely destructive and cannot be used for good (this also addresses the idea of AI being incapable):
Is the fact that AI diagnoses complex medical conditions 4x more accurately than any medical professional (85% vs 20%) an inconsequential change? Is living in a world where no doctor tells obviously sick patients "It's all in your head" not one worth desiring?

Is the fact that kids who use AI tutors perform exceptionally well (In a Texas study, kids who used AI tutors scored higher than 98 percent of their peers; in a separate test, Nigerian children learned as much in six weeks as most do in two years) irrelevant? Is democratized, high-quality education for all, including those in the third world, not a good thing?

There's an at-home foot scanner which you stand on like a scale that detects developing heart failure by measuring fluid build-up in your feet; if I'm understanding correctly, that means heart attacks and strokes can be predicted in advance - anywhere between 3 and 19 days according to tests. Is making the number one cause of death vastly more predictable and thus treatable something that doesn't matter? Is making people much safer and guarding against the most common causes of sudden death not that big a deal?

What about the fact that the relatively much more primitive AI of 2021 reduced hospital mortality by 40 percent at the hospitals where it was piloted due to being able to detect sepsis well before it developed? Is reducing a very common cause of death by almost half rendered irrelevant because people post bad AI pictures on Pixiv?

Is the fact that AI has created a lifelong vaccine for the flu that protects against all strains not important? Is it not something that improves quality of life and basic safety for the human race?

How about the fact that AI-created medicines are about to reach the clinic and will be very cheap and affordable since AI makes the drug discovery process so much cheaper? Sounds like something worth celebrating to me.

Those are just a few examples of the good AI has already done. It will do an insane amount of good for healthcare during the remainder of the 20s and make life and future health dramatically safer (the AI foot scanner being an easy current example), but I won't get into that because people will probably just say the tired old bullshit about moon bases being predicted for 2000, the "electricity too cheap to meter" promises from the 1950s, LOL where is my jetpack XD, etc. - ignoring (or, more likely, being completely unaware of) the many 'optimist' predictions that were exactly right and the many 'grounded skeptic' predictions that were miserably wrong. I think people love reductive thinking more than they love anything else in the world.

Despite some stereotypes, out-of-touch Silicon Valley types aren't the only ones who advocate for AI. Though my own household while growing up was okay, I live in a generally poor and drug-ridden area - in the current day my county is around 200th place amongst the country's ~3100. The first time I played Final Fantasy VII as a 12 year old, Midgar was fascinating and felt deeply familiar because its populace - bitter, depressed druggies living in squalor - is similar to my local area. It's not a particularly violent place, I don't think, but certainly very dreary and gray. Seeing so many people with no hope due to their economic circumstances made me hope for technology to offer them stability and a better life than that which they'd ever known.

I want a world where all forms of suffering are escapable, and where nobody has to experience shame or low self-esteem. A large part of solving problems is knowing how to solve them. Since AI works thousands, millions, or billions of times faster than human beings (this has been confirmed numerous times in multiple areas over the past three years and is not speculation), it can be leveraged to figure out the solution to any problem. I know some would be difficult or impossible to implement (Middle East wars, etc.), but some/most wouldn't have much blocking them off.

And obviously I want some of the benefit for myself. As someone with severe chronic fatigue syndrome where mild exertion can leave me broken for days, Avoidant Personality Disorder plus CPTSD, and an inability to drive (my brains got fried back in the day - a stroke during my birth) or communicate well (I've had lifelong speech problems and auditory sensory gating issues so severe that I can't form a sentence in anything except silent environments; noises aren't louder or uncomfortable, it's more like a distorted phone call with a poor connection), I need some rescuing myself. I'm very pro-AI in large part because the promise of AI finally nearing commercial availability has relieved much of the vulnerability, insecurity, and overwhelming daily anxiety I've known since 2014. Self-driving cars... AI discovering chronic fatigue treatments and accelerating the medical pipeline... AI-based hearing aids that improve sensory gating much more than normal hearing aids (not theoretical, they already exist)... it's AI that will help me and bring good and safety and security to my life more so than people ever have, unless you count the scientists who invented those things.

I know the day will come when I can go to sleep knowing that there's no more hurting people, or at least that any form of suffering can be relieved. And much of that reduced suffering will be the result of AI.

AI is indeed a job destroyer. It is, I suppose, bad for artists in some ways. But it's many other things. I understand why progressives hate it so much since so much attention has been drawn to its potential for increasing inequality, but it also has the potential to be the most democratizing technology of all time and reduce inequality more than any other.

You don't have to be as emphatic as me. But if you care about the person who looks in the mirror and thinks of themselves as ugly, then pray for AI to advance cosmetic treatments so much that everyone in the world can have the face they want and no longer have to feel self-hatred and shame. If you care about drug addicts, remember that AI has already been proven to analyze the brain thousands of times faster than human scientists and will find easy cures for addictions. If you care about the sick and suffering, be thankful that AI-developed medicines can be affordable without insurance. If you care about the poverty-stricken, be glad that AI can dramatically deflate the costs of many goods and services.

Democratize everything and eliminate as many forms of suffering as you can. AI will do that to whatever extent is physically possible, and that's why I will always support it.
Princess Snake avatars courtesy of Kunzait, Chibi Goku avatar from Velasa.

User avatar
tonysoprano300
Beyond-the-Beyond Newbie
Posts: 325
Joined: Fri Mar 01, 2024 2:40 am

Re: How would you feel about AI being used for deceased voice actors?

Post by tonysoprano300 » Wed Jul 02, 2025 4:06 pm

Dr. Casey wrote: Wed Jul 02, 2025 1:57 am Okay. I haven't read much of this thread, but given that I'm seemingly much more familiar with AI than anyone else here, I think I should give Kanzenshuu a primer. [...]
Yeah. There are an exceptional number of use cases, and as someone who, like you, has been following the latest developments very closely, I can say it's improving at a rate that is almost unfathomable. Like, we had GPT-3 about three years ago, but compared to now that may as well be the difference between the iPhone 4 and the iPhone 15.

To be clear, I would not be cool with a studio using an actor's likeness without their consent. Doesn't matter if they're dead or alive. In regards to the macro discussion around AI being used in art... I mean, I get the apprehension. I feel the same emotions that many others here express about it, but I just can't justify it rationally. I don't think it's that reasonable to say that because something can take someone's job, we ought not to use that thing. You could stymie all technological innovation under that logic; job automation has been happening since the industrial revolution. The emergence of the internet was met with the same hostility because of how many jobs it took, and the super-complicated pocket computers we call smartphones were met with animosity as well, given how many jobs they completely undermined. Machine automation in construction took away many jobs, etc. It's just that in the past, art was usually safe from those things. For a long time we believed that art was this novel human concept that reflected human creativity and ingenuity, but now... we're kind of dealing with an existential question of whether that's actually true and where that leaves us.

Personally, I'll never intentionally consume art where AI is responsible for more than like 40% of the workflow, but that's just because I'm very big on the creative process. I'm very interested in the lived experiences that led to some creation, and the talent level of the people involved. I'll watch documentaries about my favourite movies and TV series because I'm so invested in it.

That said, this is just my preference, and the general public doesn't need to share it. Most people don't care as long as they enjoyed the end result. Honestly, I feel like if I were to conduct an AI art Turing test where I showed people here various photos and videos, I doubt many would notice unless I primed them beforehand.

In regard to the state of the industry stuff, this has been a problem since before AI. The main issue is that studio executives don't want to invest in projects that deviate from some industry-approved formula that they know to be profitable. You could get rid of AI completely and nothing would change about this. In 2019 I wanted to watch "Phantom Thread" in theatres and I literally couldn't unless I left the city I was in. "The Irishman", which was made by arguably the most successful director ever (Martin Scorsese), was unable to secure a wide theatrical release because he refused to compromise artistically on certain things. That was the precursor to the whole "Marvel movies are not art" thing that gained support from both Coppola and Scorsese: that the film industry has become so dominated by capital interests/profitability that the only things that can get broad mainstream exposure are films that exist to satisfy those metrics. And so the art itself becomes nothing more than an amusement park ride. Now, you could remove AI from the picture, but all a studio will do is just find someone who will do exactly what they want for cheap.

User avatar
MCDaveG
Born 'n Bred Here
Posts: 5760
Joined: Fri Aug 05, 2005 5:54 pm
Location: Prague, Czechia
Contact:

Re: How would you feel about AI being used for deceased voice actors?

Post by MCDaveG » Mon Sep 22, 2025 7:54 am

Kunzait_83 wrote: Wed Jun 11, 2025 10:52 am
Jord wrote: Wed Jun 11, 2025 3:00 am In the end most people care about how the product winds up. If AI can do a better Goku, only a minority will complain, while a majority enjoys a consistently good "performance".
Heck, Nozawa currently struggles with her Goku voice in a lot of projects. Her voice work in Kakarot and its DLC was very iffy. Daima was a bit better, but I don't think it will get much better.

In the end AI is just a tool to enhance a product. Heck, if AI animation gets better, we could get AI to draw the in-between frames. I'd rather have that than the choppy animation we saw in Super.

Repeating this for emphasis:
Kunzait_83 wrote: Wed Jun 04, 2025 10:08 amAt a certain point, I'd like people in this community to just... take five seconds out of their day to think beyond just "Me want more cool content" and "I'm going to think about this purely from the perspective of the CEO of a studio, even though I'm the furthest fucking thing from one myself, and pretend like I have any remote financial stake in the studio's bottom line, even though I don't and never will".

So long as we live in a capitalist society/economy, then jobs being lost in general is a BAD thing for average, ordinary people: you, person who is reading this, are overwhelmingly likely an average, ordinary person. You're not a billionaire head of a major film or TV studio, and you never will be. You're just some random schmuck, like me and everyone else here, who depends on a paycheck of some kind in order to live and have the basic, bare necessities for survival.

Without jobs, that check you depend on to survive goes away, and you can't afford basic things like food to eat, and a roof over your head to live in. This is how a lot of people become what we call "homeless".

Now that we got that basic concept out of the way, we'll move to something a bit more advanced: not everyone who works in Hollywood or the entertainment industry is a super wealthy, famous celebrity. There are a LOT of average, ordinary-ass blue collar manual laborers who work in the film industry: as set designers, carpenters, lighting technicians, electricians, day players, costume seamstresses, etc.

These people are not the wealthy and glamorous Will Smiths or Sydney Sweeneys of Hollywood who make a gazillion dollars to just show up and say their lines for a few weeks at a time and then do press junkets before going back to their million dollar estates: these are average, ordinary-ass people, no different from you or I, who live in a shitty apartment or tiny house, barely make bills every month, and are just working to put food on the table week to week. The only difference is that they work for their shitty paycheck on a film set instead of an office cubicle: other than that however, there's no difference whatsoever between them and any random person you know in your life.

When jobs for ordinary people disappear, in any industry: that's bad for all of us average folks who aren't fabulously wealthy. We ALL depend on these paychecks to survive and not be homeless and starving.

The people who come out ahead and do better when these jobs disappear are not ordinary, average people like you or I: the people who profit from these jobs disappearing are fabulously wealthy CEOs and executives: which, once again, you and I are not and never will be.

Again, for the umpteenth time: YOU, PERSON READING THIS, YOU'RE NOT A WEALTHY STUDIO EXECUTIVE OR CEO, AND YOU NEVER, EVER WILL BE. Like everyone else who posts here, you were born some random nobody and you'll likely die some random nobody.

You're not special (in the economic and social power sense of the word, and not like as an individual human being), you're not rich and powerful, you never will be special/rich/powerful, and you're not a temporarily embarrassed billionaire entrepreneur-in-waiting. The same thing goes for your friends, family, and anyone/everyone else you care about in your life: all of them are average, not-rich, not-powerful, un-special nobodies too, just like you, me, and everyone else in this forum.

Vegas-odds-probability of course.

I'm very sorry to puncture your fantasy here, but allow me to formally and cordially invite you back to real life for a moment here.

So taking all this into account: when you find yourself rooting for and cheering on events to progress in such a way that only fabulously wealthy business executives and CEOs come out ahead while countless average, ordinary working people suffer immensely... ask yourself "Why is that? Why am I acting like this? Who or what am I invested in here? Whose side am I on here?"

Because the thing that you're rooting and cheering for is something that in all likelihood will only ever end up harming average, ordinary people who live paycheck to paycheck, like yourself and every other person you know in life.

Wake the fuck up and smell reality here guys. This isn't a video game, this isn't a Shonen manga or anime. This isn't a fucking internet meme. This isn't a Marvel movie or comic. This isn't some stupid RPG where numbers going higher is always good, no matter what. This is real life: where shit has serious, dire, life-threatening consequences for average, innocent people. Read something of substance about reality sometime soon, ASAP, and/or otherwise touch some grass.
And also adding that most cartoon voice actors - with insanely rare exceptions - are a LOT closer to your average working person than they are to gazillionaire celebrities with power, status, and estate money at their disposal, a la most A-list Hollywood celebrities.

Your favorite anime voice actor, on average, has WAY more in common with your favorite uncle who's worked a regular job for 50+ years than they do with Leonardo DiCaprio or Denzel Washington or Tom Cruise.



So again, repeating this for emphasis:

This isn't a video game. This isn't an anime or manga. This isn't an RPG where "numbers go higher is always better no matter what".

This is real life. These are real life human beings with real life and death stakes added to their jobs and paychecks. You're also not a billionaire CEO in charge of making obscene amounts of money from these properties. You're just some regular, average jerk, just like the rest of us here, who will also never be rich and never be a billionaire CEO in charge of one of these properties and making money off of it.

So I ask again: whose fucking side are you on here? This isn't an abstract, this isn't a hypothetical: this is real, serious shit where the lives and livelihoods of millions and millions of people are at stake and on the line.

Real life will also inevitably require you at times to drop the "I'm above it all and acting as an impartial witness to events disconnected from me" bullshit, realize that some of this shit DOES in fact directly concern and impact you (and/or people you know and ostensibly care about), and actually take a fucking side and a concrete position and point of view.

And that, whether you fucking like it or not, you have an innate responsibility - as a thinking, active member of society - to actually put some serious thought and careful consideration into your concrete position and which side you ultimately pick.

And when you do ultimately pick a side, maybe try to not act so fucking shocked and chagrined when most normal people around you look at you and regard you like you're a fucking insane, delusional moron when you pick a side that obviously disfavors and disadvantages not only yourself, but moreover countless regular, average people you know and interact with regularly all around you, for completely inscrutable, asinine reasons.

The choice in this case is pretty pat and clear:

What's more important to you? That you have a dumb kiddie "pew pew" cartoon that's mildly, slightly "better quality" in your eyes? Or that countless working people - including among them countless voice actors who countless people in communities like this one claim that they love, adore, admire, and go out of their way to follow around at conventions and whatnot - do not lose their ability to keep a roof over their heads and put food on the table for themselves and their families?

And for the especially pedantic and dense among you, let me also emphasize that the blindingly obvious and glaring subtext undergirding this whole discussion here is obviously about a whole lot more than just voice acting: as AI, by this same thread of logic, inherently threatens vastly more jobs (and thus vastly more human lives, families, and livelihoods) within vastly more industries worldwide beyond just some silly cartoons and so forth.


I'd like to also now issue a serious challenge to everyone in this thread who both has a favorite anime voice actor who they idolize and follow around at conventions and are also saying simultaneously something along the lines of "well if AI can do a better job and can make more money for the IP, then I guess we all just have to accept it and move on":

I'd like every single one of you guys, at the next anime convention you attend where you're seeing your favorite VA do a panel and sign autographs for fans, to say the exact same lines of argument to them verbatim as you've said them here (about how AI is inevitable and we need to all just passively sit back and accept whatever job losses come as a result of them, as if this is a natural phenomenon like the weather, and not active decisions being made by other human beings first and foremost), and see how said favorite VA's react to it. And also how other fans in attendance also react to it as well.

And also if possible, video record the results of this and post it publicly online for the rest of us to witness. You know, for science.

Basically I'd like you guys to pull yourselves out of this completely senseless fucking daze you're seemingly stuck in - where you're just reacting to everything seemingly like this is all some kind of simulation or some shit - and come back to thinking about real life and real life human stakes for five seconds. Understand that other human beings who exist alongside you out there in the world - even ones that you can't see or talk to immediately - aren't "NPC's".

They're living, breathing, thinking, feeling human beings, just as complex and multifaceted as you or anyone you know, and that their lives, safety, and ability to earn a living and have a roof over their heads and eat food for sustenance is all JUST AS IMPORTANT and matters JUST AS MUCH TO THEM as it does to yours and your family and friends.

I'm asking you to understand that the needs of the overwhelming many - in this case, average working people like you, me, and anyone else you know or love or care about (if anyone) - vastly outweigh the needs of the few: which in this case are a relative handful of over-moneyed business executives who have so much goddamned money that they'll never, ever, ever have to work or worry about food or shelter for a single goddamned microsecond of the rest of their pampered-ass lives, nor will their children or their children's children.

I'm asking you also to question and interrogate yourself, to do some basic introspection:

"Why am I taking on and prioritizing the POV of a handful of disgustingly rich CEOs of these giant companies - that I'll never own or have any sliver of a stake in whatsoever - and putting them and their views and their interests above those of regular people like me, anyone else I know, and even my favorite anime personalities that I won't shut up about how much I idolize them?

Why am I taking on these positions of favoring the needs and wants of the powerful above those of the powerless as if their needs are inherently positive and as inevitable as the weather, when I myself am powerless, as is everyone else I know and ostensibly care about?

Doesn't me simping unthinkingly and unquestioningly for the greed and gluttony of a handful of wealthy billionaire suits above the needs of average people like myself, and even anime VA's I claim to love and look up to - doesn't that maybe make me kind of an utterly pathetic boot-licking, ass-kissing, dick-sucking sycophant and toady, no better and no different whatsoever than a medieval peasant bowing and scraping and groveling before the monarchy and nobility of ye olden days of yore?"


I'm often reminded in these sorts of discussions of that famous Upton Sinclair quote:
Upton Sinclair wrote:“It is difficult to get a man to understand something, when his salary depends on his not understanding it.”
For all the people mindlessly regurgitating these kinds of armchair CEO, pro-corporate profit platitudes in threads like this, as if they have any kind of a stake in these companies' quarterly profit margins:

You don't even work for these companies in ANY capacity whatsoever. Ergo, your salaries are NOT in any way dependent on you not grasping these simple, basic fundamentals of how real life works. Quite the contrary, as an average working person, it would GREATLY benefit your salary if you DID actually grasp the barebones basics of how power dynamics function in a worker-based society much better.

So what exactly is your excuse for being this fucking dense about how the working world works at this late fucking stage in your life?

You're NOT "secret business partners on the ground floor" with the bosses, and you never, ever will be. The bosses are not your peers and equals, and never will be - not now in the present, and not in some potential future.

The bosses, on average for the most part, generally hate the guts of all their own workers and employees (like you and your friends/family), look down on them/you, think and care only about themselves and their own greed, and are merely looking for the slightest excuse to get rid of all your jobs so that they no longer have to pay you or anyone else you know a salary of any kind ever again so that they can keep that much more of the money for themselves and have fewer people to share it with.

For more context on this dynamic, simply look up the words "greed" and "selfish" in the dictionary. "Drunk with power" is also a good resource.

Welcome to real life. You're not a future, soon-to-be rich power-broker in waiting, and you never will be. You're just another rando loser like the rest of us. So my unsolicited advice is for you to take the bosses' collective cocks out of your mouths, nut the fuck up, grow a goddamned spine along with some self-respect and dignity, and start giving half of a shit about the lives of other average working people around you - because you're one of them, your family and friends are among them, and your interests and fates are all intertwined and connected, and are also all at direct odds against the interests of big business and the corporate higher-ups above you.
I agree with Kunzait_83... and we could've expanded this with conspiracies about how studios do or do not care about fans and why they continuously do what they do... maximizing profit for minimal input as much as possible. So I'll keep it short with one sentence that resonates among those of us in the creative industry and others where the AI evangelists go strong... It is ad absurdum, but it is a serious question, as these technocrats focus only on the tech and its development and have no idea of, or care for, the impact from sociological, environmental, and other standpoints:
"If you change all the people with AI, who is going to buy your products then?".

It's hard not to go to the extreme on either end, but some people are able to say: "they can't do that, because of this and that...",
Sure, but at the same time, it's not ancient history that similar people tried to sell you radioactive medicine or vegetables, poisoned the environment with pesticides, or sold you a refrigerator with CFCs that eat away the ozone layer... they honestly don't give a shit.
Profit first, and if it's within the margins of the law, consequences later.

The funniest thing about conspiracy theories, IMHO, is that people can't believe reality is often so bleak and boring (and sometimes even worse) that it doesn't make sense next to the outlandish theories... and that's in the cases where people don't come up with conspiracies just to be original and get a few seconds of fame with chain e-mails.

And one other thing: a lot of people think that companies have bulletproof strategies, that everything is well thought out with a ton of back-up plans,
while in reality a lot of the people there operate in bubbles and are unbelievably stupid... there are well-documented reasons why that is, and that's where I'll stop so I don't go into a wall of text that nobody wants to read :)
FighterZ, Street Fighter 6, Mortal Kombat: Funky_Strudel
PS5: Dynamixx88
Trust me, I'm millenial and a designer.

User avatar
funrush
I Live Here
Posts: 2062
Joined: Sat Oct 02, 2010 2:54 pm
Location: United States

Re: How would you feel about AI being used for deceased voice actors?

Post by funrush » Wed Sep 24, 2025 2:27 am

AI does not sound as expressive as actual VAs yet. Even setting aside the ethical aspects of the argument, such as whether the original VA would want to be AI'd or whether it's right to give a job to a robot instead of a human, an impersonator would probably just sound better.

So how would I feel if, say, Masako Nozawa passed and they came out with an AI Masako Nozawa? I would be like: this is weird, you couldn't find one actual person who sounds like Masako Nozawa?

Also, to be honest, AI was a very fun novelty at first, but now when I see AI used in art it makes me feel icky. There is something uncanny and soulless about it, and I think if they had Goku voiced by AI it would dampen my enthusiasm for the franchise.

User avatar
JulieYBM
Patreon Supporter
Posts: 18512
Joined: Mon Jan 16, 2006 10:25 pm
Location: 🏳️‍⚧️🍉

Re: How would you feel about AI being used for deceased voice actors?

Post by JulieYBM » Wed Sep 24, 2025 5:33 am

There's no human connection with AI. That's the point of the arts: connecting to another human being. We all eventually die, there is no stopping that and there is no reversing it with technology.

When an actor dies, you recast. You learn to connect to other human beings, just as you do for any other situation. One day, a person will no longer create new art and that is okay.
💙💜💖 She/Her 💙💜💖

User avatar
MCDaveG
Born 'n Bred Here
Posts: 5760
Joined: Fri Aug 05, 2005 5:54 pm
Location: Prague, Czechia
Contact:

Re: How would you feel about AI being used for deceased voice actors?

Post by MCDaveG » Thu Sep 25, 2025 9:01 am

JulieYBM wrote: Wed Sep 24, 2025 5:33 am There's no human connection with AI. That's the point of the arts: connecting to another human being. We all eventually die, there is no stopping that and there is no reversing it with technology.

When an actor dies, you recast. You learn to connect to other human beings, just as you do for any other situation. One day, a person will no longer create new art and that is okay.
This... I don't have a problem with AI as a tool that helps people with their tasks. For example, the use of AI for on-set de-aging in the movie industry is an excellent example of how to get huge help: instead of spending hours on compositing and all the manual work, and instead of losing your job, you can focus on the more creative aspects. It also helps the actors themselves, as they can see the results in almost real time and adjust their acting based on that reference, or just check the pre-final results without waiting months for the post-production process.

But what I see in all that AI evangelism being adopted by companies for all the wrong reasons is a loss of humanity.
Substituting human interaction and expression with automated AI is fundamentally wrong, be it a helpdesk voice AI, fully generated art, or a generated person.
AI is just a tool, not a substitute for a human. It will never work that way, and even if it does - even if it can sometimes mimic a human flawlessly - it will still be fake at its core. Why should we invest our time in dishonesty and something that is not real by definition?

Those AI influencers are a great example and the pinnacle of the illusion that social media is.
Made on the cheap just to help businesses by tricking people and filling the world with more AI slop.
FighterZ, Street Fighter 6, Mortal Kombat: Funky_Strudel
PS5: Dynamixx88
Trust me, I'm millenial and a designer.

User avatar
PhantomSaiyan
Beyond-the-Beyond Newbie
Posts: 316
Joined: Thu Jan 16, 2025 4:32 pm
Location: A Dark Future

Re: How would you feel about AI being used for deceased voice actors?

Post by PhantomSaiyan » Thu Sep 25, 2025 1:17 pm

AI should only be used to replace the tedious shit that isn't fun. I don't understand this obsession with making AI do all of the fun artistic things.
