I prefer proper VA dubbing over AI dubbing. This is more a question of whether viewers prefer an AI dub over no dub at all when a proper dub isn’t yet available.
bruv, every anime listed on MAL has runtimes, episode counts, and additional metadata.
Fair, I don’t have the time right now to be 100% accurate with my figures, so I went with a rough estimate. I tried to be as clear as I could on that point.
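For what it’s worth, a rough figure is easy to script. Here’s a minimal sketch, assuming the unofficial Jikan REST API for MAL metadata; the MAL ID and the episodes-times-minutes math are illustrative, not exact:

```python
# Rough total-runtime estimate from MAL metadata via the unofficial
# Jikan REST API (assumption: its v4 /anime/{id} endpoint and fields).
import re

import requests


def total_minutes(mal_id: int) -> int:
    """Estimate the total runtime in minutes for one MAL entry."""
    data = requests.get(f"https://api.jikan.moe/v4/anime/{mal_id}").json()["data"]
    episodes = data.get("episodes") or 0
    # "duration" is a display string such as "24 min per ep".
    match = re.search(r"(\d+)\s*min", data.get("duration") or "")
    per_episode = int(match.group(1)) if match else 0
    return episodes * per_episode


# MAL ID 1 is Cowboy Bebop; swap in whatever entries you are estimating.
print(f"~{total_minutes(1)} minutes total")
```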
there are few N1 localizers willing to work at pennies on the dollar for whatever AMZN is valued at
If only it were that easy for fandubs to be readily available. There are few N1 localizers willing to work at Amazon’s assessed rates, and few IP holders are willing to say yes to dubbing their shows for the money fandubbers can afford. There’s also hiring the proper voice actors for the characters; the ones who do the anime justice, and won’t have fans crucifying it for a craptastic dub, deservedly ask a premium.
According to the beta testers, and the Internet at large, listeners abhorred the LLM localization and the tone-deaf synthesized speech dubbing. Keeping the original dubs is simply what folks want, especially if it’s labeled abridged.
[components of dubbing]
At least you’re now aware why this /c/ prefers subs: they’re that much cheaper and less error-prone to produce.
According to the beta testers, and the Internet at large, listeners abhorred the LLM localization and the tone-deaf synthesized speech dubbing. Keeping the original dubs is simply what folks want, especially if it’s labeled abridged.
Yes, in its current state. Will it stay that way? The tech companies are burning cash in attempts to make it not so. My hunch says even Vocaloid-tier AI dubbing will be enough for a large sector of the audience. Then the human vs. AI dubbing debate could become analogous to the debate between lossy (more accessible) and lossless (higher quality) audio.
Now, LLM localization is the greater challenge. I highly doubt those models, including the classic machine-learning ones, can reach N1-level localization quality.
The only thing funny about mentioning Vocaloid is the fact that Vocaloid synthesis has to be manually pitched, tempoed, and toned 🤣. Glad you honestly believe capitalists want to invest more in tone-deaf, pitchless speech waveforms that disqualify themselves.
It’s amusing to me how long people have been saying “yes, AI is crap, but it might not be crap some day, so just you wait!” Despite all the money tech companies have thrown at AI, it’s still as crap as it ever was, and I don’t see any reason to think it’ll get better.
Meanwhile, Crunchyroll doesn’t care if it’s crap, so long as they can get around the cost of paying humans (which is another can of worms). If they’re willing to buy this level of quality, what incentive is there for quality to improve?
“yes, AI is crap, but it might not be crap some day, so just you wait!”
I mean, there’s a gap between the capabilities of Cleverbot and ChatGPT, as referenced in this very comments section. As much as one might wish otherwise, it would be foolish to ignore past technological leaps, and how people back then laughed them off as impossible.
I don’t see any significant differences between ChatGPT and Cleverbot, if I’m honest. It might have a wider array of responses to pick from, but it’s still making the same mistakes.
It would be foolish to ignore past tech bubbles, and how people back then claimed the tech would fix all their problems in the near future, that you needed to jump on now or you wouldn’t survive (and how none of them survived).
Unlike Cleverbot, you can add project-specific context in ChatGPT. That was extremely helpful in my creative writing process, where I use it as a virtual assistant.
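For illustration, here’s a minimal sketch of what “project-specific context” amounts to through the API, assuming the OpenAI Python SDK; the model name and the notes are placeholders:

```python
# Project-specific context as a system message: every reply is then
# grounded in your notes. Assumes the OpenAI Python SDK; the model
# name and the notes below are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

project_notes = "Setting: a 12-episode space western. Keep the tone dry."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": project_notes},
        {"role": "user", "content": "Draft a logline for episode 3."},
    ],
)
print(response.choices[0].message.content)
```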
It would be foolish to ignore past tech bubbles, and how people back then claimed the tech would fix all their problems in the near future, that you needed to jump on now or you wouldn’t survive (and how none of them survived).
While largely true, that none of them survived is false. Amazon is a survivor of the dotcom bubble. Pets.com died, but Chewy perfected the concept later on. Circling back to the topic: if/when the bubble bursts, we could be talking about 90% of the AI-centric companies going under, but give it a decade or so and a “stabilized” form of AI dubbing could resurface and establish a long-lasting presence.
Now, LLM localization is the greater challenge. I highly doubt those models, including the classic machine-learning ones, can reach N1-level localization quality.
There’s no chance of that happening any time soon. Many manga and anime lean heavily on visual context, as well as the context of the story in general, to clear up situations where the language would otherwise be ambiguous, so until translation software can also use all of that context, it’s basically impossible.
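To make the gap concrete, here’s a minimal sketch of what feeding visual context to a translator could look like, assuming the OpenAI Python SDK and a vision-capable model; the image URL and the sample line are made up:

```python
# Supplying visual context alongside an ambiguous line, so the model can
# see who is speaking, to whom, and in what mood. Assumes the OpenAI
# Python SDK and a vision-capable model; the URL and line are made up.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Translate the dialogue in this panel to English. "
                     "Use the art for context: 「いいよ」 can mean 'sure' "
                     "or 'no thanks' depending on tone."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/panel.png"}},
        ],
    }],
)
print(response.choices[0].message.content)
```

Whether a model actually makes good use of the panel is exactly the open question here.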
But please, never stop supporting espeak!
espeak looks pretty cool. Thanks for sharing.