Posts Tagged ‘voice’
Neural Network Tries to Generate English Speech (RNN/LSTM)
Saturday, December 24th, 2016

By popular demand, I threw my own voice into a neural network (3 times) and got it to recreate what it had learned along the way!
These are 3 different recurrent neural networks (LSTM type) trying to find patterns in raw audio and reproduce them as well as they can. The networks are quite small considering the complexity of the data. I recorded 3 different vocal sessions as training data for the network, trying to get more impressive results out of it each time. The audio is 8-bit and at a low sample rate because sound files get very big very quickly, making the training of the network take a very long time. Well over 300 hours of training in total went into the experiments with my voice that led to this video.
The graphs are created from log files made during training, and show the progress the network was making leading up to immediately before the audio that you hear at each point in the video. Their scrolling speeds up at points where I only show a short sample of the sound, because I wanted to dedicate more time to the more impressive parts. I included a lot of information in the video itself where it’s relevant (and at the end), especially details about each of the 3 neural networks at the beginning of each of the 3 sections, so please be sure to check that if you’d like more details.
I’m less happy with the results this time around than in my last RNN+voice video, because I’ve experimented much less with my own voice than I have with the higher-pitched voices from various games, and haven’t found the ideal combination of settings yet. That’s because I don’t really want to hear the sound of my own voice, but so many people commented on my old video that they wanted to hear a neural network trained on a male English voice, so here we are now! Also, learning from a low-pitched voice is not as easy as learning from a high-pitched voice, for reasons explained in the first part of the video (basically, the fundamental pitch patterns are longer with a low-pitched voice).
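To give a rough sense of that difference (the pitch and sample-rate values below are illustrative assumptions, not measurements from my recordings), a lower fundamental frequency means each repeating pitch period spans far more samples, so the network has to track patterns over longer distances:

```python
# Rough illustration with assumed values: how many samples one pitch period spans.
sample_rate = 11025  # Hz, an assumed low sample rate similar to my earlier experiment

for name, f0 in [("low-pitched voice (~110 Hz)", 110.0),
                 ("high-pitched voice (~440 Hz)", 440.0)]:
    period_samples = sample_rate / f0
    print(f"{name}: ~{period_samples:.0f} samples per fundamental period")

# low-pitched voice (~110 Hz): ~100 samples per fundamental period
# high-pitched voice (~440 Hz): ~25 samples per fundamental period
```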
The neural network software is the open-source “torch-rnn”, although that is only designed to learn from plain text. Frankly, I’m still amazed at what a good job it does of learning from raw audio, with many overlapping patterns over longer timeframes than text. I made a program (explained here, and available for download here) that substitutes the raw bytes in any file (e.g. audio) for valid UTF-8 text characters, and torch-rnn happily learned from it. My program also substitutes torch-rnn’s generated text back into raw bytes to get audio again. I do not understand the mathematics and low-level algorithms that make a neural network work, and I cannot program my own, so please check the code and .md files at torch-rnn’s GitHub page for details. Also, torch-rnn is actually a more efficient fork of an earlier program called char-rnn, whose project page also has a lot of useful information.
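As a minimal sketch of that byte-substitution idea (this is not my actual program, and the particular 256-character alphabet here is an arbitrary assumption), the round trip could look something like this:

```python
# Map each of the 256 possible byte values to one fixed Unicode character,
# so the converted file is valid UTF-8 "text" that torch-rnn can train on.
ALPHABET = [chr(0x100 + i) for i in range(256)]   # arbitrary choice of 256 code points
BYTE_FOR_CHAR = {c: i for i, c in enumerate(ALPHABET)}

def bytes_to_text(data: bytes) -> str:
    return "".join(ALPHABET[b] for b in data)

def text_to_bytes(text: str) -> bytes:
    return bytes(BYTE_FOR_CHAR[c] for c in text)

# Raw 8-bit audio -> UTF-8 "text" for training (file names are hypothetical):
with open("voice.raw", "rb") as f:
    raw = f.read()
with open("voice_as_text.txt", "w", encoding="utf-8") as f:
    f.write(bytes_to_text(raw))

# ...and torch-rnn's generated text -> raw bytes, i.e. audio, again:
with open("generated.txt", encoding="utf-8") as f:
    generated = f.read()
with open("generated.raw", "wb") as f:
    f.write(text_to_bytes(generated))
```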
I will probably soon release the program that I wrote to create the line graphs from CSV files. It can make images up to 16383 pixels wide/tall with customisable colours, from CSV files with hundreds of thousands of lines, in a few seconds. All free software I could find failed hideously at this (e.g. OpenOffice Calc took over a minute to refresh the screen with only a fraction of that many lines, during which time it stopped responding; the lines overlapped in an ugly way that meant you couldn’t even see the average value; and “exporting” graphs is limited to pressing Print Screen, so you’re limited to the width of your screen… really?).
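For anyone curious about the general approach, here is a rough sketch of the concept using Pillow (not my program; it assumes purely numeric CSV rows, and the file names are made up):

```python
import csv
from PIL import Image, ImageDraw

def plot_csv(csv_path, out_path, column=1, width=16383, height=1024):
    # Read one numeric column from the CSV (no header row assumed).
    values = []
    with open(csv_path, newline="") as f:
        for row in csv.reader(f):
            values.append(float(row[column]))

    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero for flat data

    # Scale every point into the image and draw a single polyline through them.
    points = [
        (i * (width - 1) / (len(values) - 1),
         (height - 1) - (v - lo) / span * (height - 1))
        for i, v in enumerate(values)
    ]
    img = Image.new("RGB", (width, height), "white")
    ImageDraw.Draw(img).line(points, fill="blue")
    img.save(out_path)

plot_csv("training_log.csv", "training_loss.png")  # hypothetical file names
```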
Neural Network Learns to Generate Voice (RNN/LSTM)
Tuesday, May 24th, 2016

This is what happens when you throw raw audio (which happens to be a cute voice) into a neural network and then tell it to spit out what it’s learned. (WARNING: Although I decreased the volume and there’s a visual indication of what sound is to come, please don’t have your volume too high.)
This is a recurrent neural network (LSTM type) with 3 layers of 680 neurons each, trying to find patterns in audio and reproduce them as well as it can. It’s not a particularly big network considering the complexity and size of the data, mostly due to computing constraints, which makes me even more impressed with what it managed to do.
The audio that the network was learning from is voice actress Kanematsu Yuka voicing Hinata from Pure Pure. I used 11025 Hz, 8-bit audio because sound files get big quickly, at least compared to text files – 10 minutes already runs to 6.29MB, while that much plain text would take weeks or months for a human to read.
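That size checks out with a quick back-of-the-envelope calculation (8-bit mono means 1 byte per sample):

```python
# 8-bit mono audio stores 1 byte per sample.
sample_rate = 11025                 # Hz
seconds = 10 * 60                   # 10 minutes
size_bytes = sample_rate * seconds  # 6,615,000 bytes
print(size_bytes / 2**20)           # ~6.31 MiB, in line with the ~6.29 MB figure above
```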
I was using the program “torch-rnn”, which is actually designed to learn from and generate plain text. I wrote a program that converts any data into UTF-8 text and vice versa, and to my excitement, torch-rnn happily processed that text as if there was nothing unusual about it. I did this because I don’t know where to begin coding my own neural network program, but this workaround has some annoying constraints. E.g. torch-rnn doesn’t like to output more than about 300KB of data, hence all generated sounds being only ~27 seconds long.
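That length lines up with the sample rate if you take the ~300KB figure as roughly 300,000 generated characters and assume each character converts back to one 8-bit sample (a simplification, since some characters take more than one byte in UTF-8):

```python
# Rough check under the assumption above: characters -> samples -> seconds.
generated_chars = 300_000   # approximate torch-rnn output limit mentioned above
sample_rate = 11025         # Hz
print(generated_chars / sample_rate)   # ~27.2 seconds of audio
```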
It took roughly 29 hours to train the network to ~35 epochs (74,000 iterations) and over 12 hours to generate the samples (output audio). These times are quite approximate as the same server was both training and sampling (from past network “checkpoints”) at the same time, which slowed it down. Huge thanks go to Melan for letting me use his server for this fun project! Let’s try a bigger network next time, if you can stand waiting an hour for 27 seconds of potentially-useless audio. xD
I feel that my target audience couldn’t possibly get any smaller than it is right now…
EDIT: Because I’ve been asked a lot, the settings I used for training were: rnn_size: 680, num_layers: 3, wordvec_size: 110. Also, here are some graphs showing losses during training (click to see full-size versions):
Training loss (at every iteration) (linear time scale)
Training loss (at every iteration) (logarithmic time scale)
Validation loss (at every checkpoint, i.e. 1000th iteration) (linear time scale)
Validation loss (at every checkpoint, i.e. 1000th iteration) (logarithmic time scale)
For sampling, I simply used torch-rnn’s default settings (which means a temperature of 1), specifying only the checkpoint and length and redirecting the output to a file. For training an RNN on voice in this way, I think the most important aspect is how “clear” the audio is, i.e. how obvious the patterns are against the noise, plus the fact that it’s 8-bit, so it only has to learn from 256 unique symbols. This relatively sharp-sounding voice is very close to a filtered sawtooth signal, compared to other voices which are more breathy/noisy (the difference is even visible to human eyes just by looking at the waveform), so I think the network had an easier time learning this voice than it would some others. There’s also the simple fact that, because the voice is high-pitched, the patterns that it needs to learn are shorter.
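For anyone unfamiliar with the temperature setting, here is a generic illustration of what it does when sampling (this is just the standard softmax-with-temperature idea, not torch-rnn’s internal code):

```python
import numpy as np

def sample_with_temperature(logits, temperature=1.0, rng=None):
    """Pick one symbol index from unnormalised scores, scaled by temperature."""
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())   # softmax (shifted for numerical stability)
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# temperature = 1 samples from the network's predicted distribution as-is;
# lower values make the output more conservative, higher values more random.
print(sample_with_temperature([2.0, 1.0, 0.1], temperature=1.0))
```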
EDIT 2: I have been asked several times about my binary-to-UTF-8 program. The program basically substitutes any raw byte value for a valid UTF-8 encoding of a character. So after conversion, there’ll be a maximum of 256 unique UTF-8 characters. I threw the program together in VB6, so it will only run on Windows. However, I rewrote all the important code in a C++-like pseudocode here. Also, here is an English explanation of how my binary-to-UTF-8 program works.
EDIT 3: I have released my BinToUTF8 Windows program! Please see this post.
“PONPONPON”, sings MISAKA.
Wednesday, January 25th, 2012

I tried to create a voice based on samples of Misaka Imouto from the anime “A Certain Magical Index”. Then, I decided to make it sing the song that was stuck in my head at the time, which happened to be Kyary Pamyu Pamyu’s “PONPONPON” (I didn’t want to try to make such a crazy music video as the original, though). The key is lowered to suit the voice better.
Massive thanks to DelTiger(でるたいがー) for the off-vocal version that I used here. The illustration is by tachi008.
I actually used samples from the PSP game rather than the anime, for clean recordings (so that there was no music or sound effects in the background). It feels like a long time since I last uploaded a video with a CG voice.
I first made a MIDI of the notes the voice sings in the original version using Anvil Studio. The voice was made with Virtual Singer (part of Melody Assistant). Screen-capturing was done with VirtualDub and the video edited in Magix Movie Edit, while the audio editing was done in Audacity.
Unlocked Girl (Robbi-985 Remix) ~ 6 Voice Style [originally by IOSYS]
Wednesday, January 30th, 2008

Sorry about my long period of inactivity over the past week! >_<
Here’s an updated version of my remix of Unlocked Girl by IOSYS! I called it “6 Voice Style”, since instead of just the 1 electronic voice, there are now 6 voices used, including a backing harmonizing voice (Miku Hatsune)! The other 5 voices were made using RealSinger in Virtual Singer, part of Melody Assistant. There’s also now a much cooler-sounding Distortion Guitar used during the choruses.
Here’s the YouTube video, this time featuring my attempt at a full translation of all the lyrics, and 10% of your RDA of randomness near the end. Of course, this is in mono and at low quality. This time, I think I avoided typos and forgetting things in the credits. ><
The actual YouTube page is here - there’ll be comments there. ;P Of course, I don’t mind if you leave comments here at my blog either! ;)

You can download the high-quality stereo MP3 of this “6 Voice Style” [6.48 MB] (or if that link doesn’t work, you can try this link, although it will be much slower, sorry). I’m also putting up the original MIDI for download [64 KB], which I made for this remix and which is at the heart of this MP3. Please note that the MIDI is not complete. Sorry about that. But it’s certainly better than nothing (well, probably). It does contain all parts of the song, by which I mean that just by copy-and-pasting things you can complete the song. But I didn’t do that to the MIDI – just to the final MP3’s audio. That’s why the MP3 is complete and the MIDI isn’t – I’m lazy. ;P