WaveNet: Seiyuu will be replaced by computers

deepmind.com/blog/wavenet-generative-model-raw-audio/

>This post presents WaveNet, a deep generative model of raw audio waveforms. We show that WaveNets are able to generate speech which mimics any human voice and which sounds more natural than the best existing Text-to-Speech systems, reducing the gap with human performance by over 50%.

>We also demonstrate that the same network can be used to synthesize other audio signals such as music, and present some striking samples of automatically generated piano pieces.
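For the curious: the core trick in the paper is a stack of dilated causal convolutions over the raw sample stream, so each output sample only sees past samples while the receptive field grows exponentially with depth. A toy numpy sketch of just that idea (all weights, sizes, and the 4-layer stack here are made up for illustration, not taken from the paper):

```python
import numpy as np

def causal_dilated_conv(x, w, dilation):
    """1-D causal convolution: output at time t sees only inputs <= t.
    x: (T,) signal, w: (K,) filter taps, dilation: gap between taps."""
    K = len(w)
    pad = (K - 1) * dilation
    # left-pad so output length matches input and never peeks ahead
    xp = np.concatenate([np.zeros(pad), x])
    return np.array([
        sum(w[k] * xp[t + pad - k * dilation] for k in range(K))
        for t in range(len(x))
    ])

# Stack with dilations 1, 2, 4, 8: with 2-tap filters the receptive
# field is 1 + 1 + 2 + 4 + 8 = 16 past samples.
rng = np.random.default_rng(0)
signal = rng.standard_normal(32)
out = signal
for d in (1, 2, 4, 8):
    out = np.tanh(causal_dilated_conv(out, np.array([0.5, 0.5]), d))
print(out.shape)  # (32,) -- same length as the input
```

The real model adds gated activations, residual/skip connections, and a softmax over quantized sample values on top of this, but the dilation stack is what lets it model raw audio at 16 kHz at all.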

Eventually, yes. But not for the foreseeable future.

The waifu age is dawning

>no sample
>just blah blah blah
Back to my IA songs.

It's already better than Vocaloid, and it can generate music procedurally on its own.

Miku killer?

More like better vocaloids.

Oh yeah, like Miku was going to replace human singers

Not relevant to real life for 20 years or more.

>Those piano samples
Neat, so how long before the software makes it into the vocaloids?

As long as it doesn't sound as awful as vocalshit, I'm sort of okay with that happening.

>deepmind.com/blog/wavenet-generative-model-raw-audio/

The babble the system generates with no text input is amazing: storage.googleapis.com/deepmind-media/pixie/knowing-what-to-say/first-list/speaker-4.wav

RIP Miku you had a good run

Is this some kind of scam?

Yes, the quality is close to human speech, but the point is that seiyuu are a huge financial part of the anime/J-game industry. They're already idolized.

You can create another Hatsune Miku, sure, but money-wise it's just not comparable to the whole seiyuu industry.

So large companies won't use it, but the real benefit of this technology is that individuals and amateur animation groups can make animation more easily without worrying about having to get shitty cheap voice actors anymore.

>Seiyuu will be replaced by computers
It doesn't seem like you know much about the Japanese VA industry.

Cool idea. Wake me up in 2 decades when it matters.

>So large companies won't use it, but the real benefit of this technology is that individuals and amateur animation groups can make animation more easily without worrying about having to get shitty cheap voice actors anymore.

It's more likely that it will take off in video games first. Realistic voice synthesis would allow developers to produce games with a whole lot more dialog.

I look forward to it, hopefully they'll use it for actual robot characters

It could actually be a boon for western dubbing.

Imagine your favorite animu characters speaking in English, but still sounding like the original nihonjin voice actors.

But that's probably still a thing for years, perhaps even decades into the future.

I'm ready for the computerized fandubs of the future

>Imagine your favorite animu characters speaking in English, but still sounding like the original nihonjin voice actors.

Even better, imagine the dubbed voices not sounding like total crap. Robots could hardly be worse.

THUGGERY

GLORIOUS
WAIFU
AGE!!!!!!!!

The piano part was not that good.

>Computers playing the piano
Spooky

Will it sing daisy bell?

>Seiyuu will be replaced by computers
They said the same thing about Vocaloids.

Vocaloids will be replaced by seiyuus.

But user, vocaloids are based on seiyuus.

Shocking revelation.

>no sample

Must be hard being fucking retarded.

Cthuko shall live again.

In twenty years we'll have perfect copies of voice actors able to say whatever you want in whatever language you want and Sup Forums will still prefer subtitles.

Yes. The future of eroge is glorious.

I doubt it. Most anime producers aren't interested in fucking a computer.

So does this mean I'll have a voiced waifu AI in the future?

>perfect copies
You might have learned how languages work by then though.

Yes, but not just that. It will be your waifu's voice specifically, since the network will have learned how to sound just like her.

It's not right.

Neat.
>tfw your waifu will wake you up
>tfw you can freely talk with her
I just hope we get artificial-personality technology soon; then it would be perfect.

Train it on more piano samples and it'll get good, user.

youtu.be/QEjdiE0AoCU
youtu.be/7Pq-S557XQU

Vocaloids still use human VAs as their voice providers though.

Especially during concerts, where the VAs do it live.

In the end, nothing changes.

Does this mean seiyuufags will die? Count me in!