Archive for the ‘Videos’ Category
Arduino MIDI Synth Demo Preview (square + noise) [download]
Tuesday, October 30th, 2018

Up to 15 notes at once on an Arduino using no timers! Well, the quality drops a lot as the number of playing notes increases, but still!
This is a demo of a MIDI synth I’m developing for the Arduino. Its sound is currently very basic – it has no concept of different instruments, can only produce square waves and noise, and each MIDI channel can only be at one of 3 different volume levels. It has no fixed sample rate, and is always producing a new sample as quickly as possible, which is slower when more notes play at once (in practice, the sample rate ranges from about 20 kHz down to about 6 kHz).
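To give a feel for the timer-less approach, here’s a rough sketch of the idea (not my actual code – the names and numbers are made up for illustration): each pass through the loop advances every active channel and produces one sample, so each extra note adds work per sample and lowers the sample rate.

    uint16_t phase[15], step[15]; // one phase accumulator per possible note
    uint8_t active = 0;           // number of notes currently playing

    uint8_t nextSample() {
      uint16_t mix = 0;
      for (uint8_t c = 0; c < active; c++) {
        phase[c] += step[c];              // advance this note's oscillator
        if (phase[c] & 0x8000) mix += 17; // square wave: add during the high half of the cycle
      }
      return (uint8_t)mix;                // 15 notes x 17 = 255 at most, so it fits in 8 bits
    }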
It supports pitch-bends, modulation, monophonic/polyphonic MIDI channel mode, and some percussive notes. It also recognises some sysex messages, including GM/GS/XG “reset” messages and GS/XG messages to set a MIDI channel’s percussion mode.
To use the code yourself (hardware info):
If you want the Arduino to accept MIDI data from “real” MIDI hardware (through a MIDI socket), you’ll need to build a circuit with an optocoupler, connect it to the Arduino’s serial RX pin, and change #define UseRealMIDIPort False to #define UseRealMIDIPort True (this affects the baud rate used). Due to laziness, while testing, I used a program called “Hairless MIDI<->Serial Bridge” and the virtual MIDI cable driver “MIDI Yoke” to send MIDI data straight over the Arduino’s USB serial connection, instead of building the proper circuit.
The code controls one “port” on the Arduino (a group of 8 pins determined by the specific Arduino board model), which connects to an 8-bit DAC (a simple R2R resistor ladder) to give an 8-bit audio output. I’m using port C on the Arduino Mega, because that neatly corresponds to digital pins 37 (LSB) to 30 (MSB), but the code may work on other Arduino boards with minimal changes, as long as there is a port where all 8 bits are mapped to digital pins. The output port (PORTAudio and DDRAudio) would need changing to one consisting of 8 usable pins, and the maximum number of notes playing at once (NumSoundChans) could either be reduced (which will save CPU time and memory) or, in the case of the Arduino Due, increased.
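In other words, the hardware-specific part boils down to something like this (a sketch of my understanding, not the actual code – only PORTAudio, DDRAudio and NumSoundChans are names from the real code; nextSample() is from the sketch above):

    #define PORTAudio PORTC // on the Mega: digital pins 37 (LSB) to 30 (MSB)
    #define DDRAudio  DDRC
    #define NumSoundChans 15

    void setup() {
      DDRAudio = 0xFF;          // set all 8 pins of the port to output
    }

    void loop() {
      PORTAudio = nextSample(); // one register write updates the whole R2R DAC at once
    }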
You can download the code for the current version here (13.2 KB). You will also need the Fast Division library (info). Note that the code includes most of the above hardware info in the form of comments. =)
P.S. The MIDI in the video is being played on MIDITester. I did not make the MIDI, and I don’t know who did. Please, people, at least credit yourself in the metadata ;_;
Testing different wave tables for Arduino MIDI synth
Monday, October 29th, 2018

I’m working on an Arduino MIDI synth, and just tonight, I tried to add support for complex wave shapes (previously, it was only square waves and noise). Since I’ve now got enough working to be able to listen to these tiny (8-sample) lookup tables for different waveforms, I thought I’d make this video to show what they sound like. =)
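For the curious, the core of wave-table playback can be sketched like this (illustrative only – the names and values are made up, not from my code): a fixed-point phase accumulator steps through the 8 table entries at the note’s frequency.

    const uint8_t waveTable[8] = {128, 218, 255, 218, 128, 38, 0, 38}; // a rough sine

    uint16_t phase = 0;      // fixed point: the top 3 bits select one of the 8 samples
    uint16_t phaseStep = 42; // set per note; a bigger step means a higher pitch

    uint8_t nextSample() {
      phase += phaseStep;
      return waveTable[phase >> 13]; // top 3 bits of the accumulator index the table
    }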
(Also, I finally found a good use for block Unicode characters!)
Dojikko v2
Sunday, July 8th, 2018

I’ve given my little robot a huge upgrade – she can now see the world properly! This video is just an introduction, and there’ll be a proper demonstration of her path-following abilities later.
Her brain is now a Raspberry Pi instead of an Arduino, and she sees with an infrared camera (for better low-light performance) in greyscale, instead of just measuring the distance in front of her. This means she can now have a proper goal – instead of just moving towards walls and then turning, she can now drive along a path!
She uses a neural network to judge how quickly she should be driving and how to steer. Although she only sees at 128×64 resolution, this is a huge improvement! Currently, I’m still in the process of training her well (driving along paths with her recording the view and the controls that I’m giving her).
In a future video, I will also go into details of the circuitry, including the way that the Raspberry Pi can hold its own power on and only cut it once it has finished shutting down. The only explanations for how to do this that I could find online required a ridiculous number of components and constantly leaked small amounts of power when turned off; this way does not, and it only requires a relay, a transistor and a resistor.
Please forgive the inverted colours of the subtitles!
I only noticed this after I had subtitled the entire video, and there’s no easy way to batch-change this in the video editor. I tried using a hex editor to find/replace the colours, but to no avail… orz
I could pretend that it’s a throw-back to the time when I used the colours this way, but it was actually a mistake.
Gyroscope MIDI Controller
Tuesday, January 23rd, 2018

I made a program to send pitch-bend messages to Bawami (my MIDI synth) based on the strongest reading out of the X/Y/Z axes of the gyroscope on the GY-87 sensor board, via an Arduino. Gently moving the sensor makes for a really natural-feeling control for vibrato, allowing really subtle (or not-so-subtle) pitch changes.
I was able to get readings from the board to Windows at a stable speed of 400 Hz, but to avoid spamming too many MIDI messages (a problem if sending them outside the computer to some hardware synth), the pitch-bends are “only” being sent at 100 Hz. =P
The GY-87 also has X/Y/Z accelerometers, but these were way too sensitive to orientation to be convenient to use as a controller. Gravity is always pulling down on one axis, so if you tilt the sensor then it massively overwhelms the readings that you actually want (the ones caused by moving the sensor around). The best use I could get from them was tracking the maximum difference between 2 points in time and sending that as a MIDI message, which basically just made it respond to vibrations (and only made positive numbers). The gyros naturally only detect changes, so the readings centre around 0 and go negative when turning in one direction and positive in the other, ideal for vibrato.
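If you want to try something similar, the core idea looks roughly like this on an Arduino (a hedged sketch, not my actual program – the register addresses are the standard MPU-6050 ones on the GY-87 board, but everything else is illustrative):

    #include <Wire.h>

    const uint8_t MPU = 0x68; // the MPU-6050's I2C address

    int16_t readAxis(uint8_t reg) {
      Wire.beginTransmission(MPU);
      Wire.write(reg);
      Wire.endTransmission(false);
      Wire.requestFrom(MPU, (uint8_t)2);
      int16_t hi = Wire.read();
      int16_t lo = Wire.read();
      return (hi << 8) | lo;
    }

    void setup() {
      Wire.begin();
      Wire.beginTransmission(MPU);
      Wire.write(0x6B); // PWR_MGMT_1: clear the sleep bit to wake the chip
      Wire.write(0);
      Wire.endTransmission();
      Serial.begin(115200); // e.g. to a serial<->MIDI program on the PC
    }

    void loop() {
      // The gyro X/Y/Z readings start at register 0x43
      int16_t x = readAxis(0x43), y = readAxis(0x45), z = readAxis(0x47);
      int16_t strongest = x;
      if (abs(y) > abs(strongest)) strongest = y;
      if (abs(z) > abs(strongest)) strongest = z;

      // Map the signed reading onto MIDI's 14-bit pitch-bend range (8192 = centre)
      long bend = constrain(8192L + strongest / 4, 0L, 16383L);
      Serial.write(0xE0);                          // pitch-bend status, channel 1
      Serial.write((uint8_t)(bend & 0x7F));        // 7-bit LSB
      Serial.write((uint8_t)((bend >> 7) & 0x7F)); // 7-bit MSB
      delay(10);                                   // roughly the 100 Hz mentioned above
    }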
Testing MMSSTV with messed-up signals
Tuesday, October 31st, 2017

I applied a couple of strong vibratos to an SSTV signal (a picture encoded as one long sound) just to see what effect the unstable frequency would have when decoded using MMSSTV. Amazingly, it was still able to detect the signal and start decoding, but of course, it looks too scrambled to make out. I like how the artifacts look, though.
I’m using Virtual Audio Cable to connect MMSSTV (encoder/decoder) with Audacity (which I used to apply the excessive vibratos), and Audio Repeater to “echo” the sound from the virtual cable to the speakers, so I can hear it live (and capture it in the video). Audio Repeater introduces about half a second of delay, though.
SSTV (slow-scan television) is a way of transmitting pictures over the air when you have very little bandwidth available (around 2.6 kHz, vs several MHz for ordinary analogue TV), sometimes used by amateur radio operators. It works by modulating the frequency of a sine wave according to the brightness of the pixels (per colour channel) row-by-row, so applying a vibrato to the sound pulls the signal out of sync and back again (while it still stays in sync on average). In other words, the rows are being shifted left/right (each colour channel independently). That’s why the image is rough along the vertical edges instead of being a nice straight line – sometimes, each colour channel is being pulled out of sync and dragged to the right, and sometimes it’s being pulled to the left (which causes it to wrap back to the right with inverted colours, because it’s intruding on the time slot that was dedicated to a different colour channel). Fun stuff to mess around with!
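As a rough illustration of that brightness-to-frequency mapping (a sketch of the general SSTV idea, not MMSSTV’s code – most common modes put black at 1500 Hz and white at 2300 Hz), each pixel becomes a short burst of tone:

    #include <math.h>
    #include <stdint.h>

    const double SAMPLE_RATE = 44100.0;
    const double TWO_PI = 6.283185307179586;

    // Append one pixel as a burst of tone; the phase stays continuous
    // across pixels so the output is one smooth frequency-modulated wave.
    void pixelToTone(uint8_t brightness, double msPerPixel,
                     double *phase, float *out, int *count) {
      double freq = 1500.0 + (800.0 * brightness) / 255.0; // 1500 Hz (black) .. 2300 Hz (white)
      int samples = (int)(SAMPLE_RATE * msPerPixel / 1000.0);
      for (int i = 0; i < samples; i++) {
        out[(*count)++] = (float)sin(*phase);
        *phase += TWO_PI * freq / SAMPLE_RATE;
      }
    }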
Neural Network Tries to Generate English Speech (RNN/LSTM)
Saturday, December 24th, 2016

By popular demand, I threw my own voice into a neural network (3 times) and got it to recreate what it had learned along the way!
This is 3 different recurrent neural networks (LSTM type) trying to find patterns in raw audio and reproduce them as well as they can. The networks are quite small considering the complexity of the data. I recorded 3 different vocal sessions as training data for the network, trying to get more impressive results out of the network each time. The audio is 8-bit and at a low sample rate, because sound files get very big very quickly, making the training of the network take a very long time. Well over 300 hours of training in total went into the experiments with my voice that led to this video.
The graphs are created from log files made during training, and show the progress that the network was making right up to the audio that you hear at each point in the video. Their scrolling speeds up at points where I only show a short sample of the sound, because I wanted to dedicate more time to the more impressive parts. I included a lot of information in the video itself where it’s relevant (and at the end), especially details about each of the 3 neural networks at the beginning of each of the 3 sections, so please be sure to check that if you’d like more details.
I’m less happy with the results this time around than in my last RNN+voice video, because I’ve experimented much less with my own voice than I have with higher-pitched voices from various games and haven’t found the ideal combination of settings yet. That’s because I don’t really want to hear the sound of my own voice, but so many people commented on my old video that they wanted to hear a neural network trained on a male English voice, so here we are now! Also, learning from a low-pitched voice is not as easy as with a high-pitched voice, for reasons explained in the first part of the video (basically, the most fundamental patterns are longer with a low-pitched voice).
The neural network software is the open-source “torch-rnn”, although that is only designed to learn from plain text. Frankly, I’m still amazed at what a good job it does of learning from raw audio, which has many overlapping patterns over longer timeframes than text. I made a program (explained here, and available for download here) that substitutes raw bytes in any file (e.g. audio) with valid UTF-8 text characters, and torch-rnn happily learned from it. My program also substituted torch-rnn’s generated text back into raw bytes to get audio again. I do not understand the mathematics and low-level algorithms that make a neural network work, and I cannot program my own, so please check the code and .md files at torch-rnn’s GitHub page for details. Also, torch-rnn is actually a more efficient fork of an earlier program called char-rnn, whose project page also has a lot of useful information.
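The substitution trick can be sketched like this (a simplified re-creation of the idea, not my actual program’s code): each byte value 0–255 maps to its own 2-byte UTF-8 character, and reversing the mapping recovers the audio.

    #include <cstdint>
    #include <string>
    #include <vector>

    // Encode: byte value b becomes code point U+0100+b, written as 2 UTF-8 bytes,
    // so every possible byte becomes a distinct, valid text character.
    std::string bytesToText(const std::vector<uint8_t> &data) {
      std::string out;
      for (uint8_t b : data) {
        uint32_t cp = 0x100 + b;
        out += (char)(0xC0 | (cp >> 6));   // UTF-8 lead byte
        out += (char)(0x80 | (cp & 0x3F)); // UTF-8 continuation byte
      }
      return out;
    }

    // Decode: invert the mapping to turn generated text back into raw bytes.
    std::vector<uint8_t> textToBytes(const std::string &text) {
      std::vector<uint8_t> out;
      for (size_t i = 0; i + 1 < text.size(); i += 2) {
        uint32_t cp = ((text[i] & 0x1F) << 6) | (text[i + 1] & 0x3F);
        out.push_back((uint8_t)(cp - 0x100));
      }
      return out;
    }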
I will probably soon release the program that I wrote to create the line graphs from CSV files. It can make images up to 16383 pixels wide/tall with customisable colours, from CSV files with hundreds of thousands of lines, in a few seconds. All free software I could find failed hideously at this (e.g. OpenOffice Calc took over a minute to refresh the screen with only a fraction of that many lines, during which time it stopped responding; the lines overlapped in an ugly way that meant you couldn’t even see the average value; and “exporting” graphs is limited to pressing Print Screen, so you’re limited to the width of your screen… really?).
Noob Pancakes
Sunday, September 25th, 2016

This isn’t going to become a thing on this channel – I was just hungry and wanted to record it… I think I should stick to computer stuff. If I hadn’t put any effort into editing this, I would’ve put it on my other channel.
CrowdSound Retro Rock-ish Remix
Monday, September 19th, 2016

I started playing around with the melody and chord progression that a huge number of people created together at CrowdSound, and ended up making this little arrangement for Bawami, my MIDI synth. It took a few hours over 3 nights.
CrowdSound is a site where people were given a chord progression and song structure, and were then allowed to vote note-by-note to make a melody. It’s an experiment to see if lots of people can work together to gradually make an entire song by voting on many tiny additions. Since people are making remixes already, I decided I’d try, too.
As of the 15th of August 2016, only the melody is complete, so I imported the MIDI of the melody (from here) into Sekaiju (the MIDI editor I use). From there, based on the chord progression, I made tracks for bass, percussion, overdriven and acoustic guitar parts, a 2-part pad and a portamento synth sequence to liven things up a bit. Then I decided on how I’d switch between the various backing parts so they weren’t all fighting for the spotlight at the same time. After that, I changed the velocities of all the melody notes (since I’m using a velocity-sensitive lead instrument on Bawami) to make it sound less annoying and repetitive and to complement the beat. I also shortened some long notes (which is within CrowdSound’s rules for arranging) to let the lead stop for breath every now and then, added modulation (vibrato) sparingly, and decided to sometimes pitch-bend from one note to another during the conclusion instead of instantly jumping (I think this should be allowed, because a real human voice would have to do this all the time =P).
In keeping with the openness of CrowdSound, you can download my MIDI (designed to be played on Bawami rev.132 or later) here. It uses several GS “variation” instruments, so it will sound worse on GM synths. It also uses an instrument (12-string Guitar) which is not present in Bawami rev.131, the currently-released version, but it should still sound fine on that version (it’ll fall back to the “Acoustic Guitar (Steel)” instrument). That, along with many other changes, will be in the next version I release!
This MIDI is playing on BaWaMI, which is a freeware, retro-sounding MIDI synth that uses subtractive synthesis. I’ve been working on it every now and then since 2010. You can find out more (and grab the latest version) here (click its title to get to the download page).
The 3D scrolling view of notes is MIDITrail.
BaWaMI struggles to play Arecibo by TheSuperMarioBros2 [Black MIDI]
Wednesday, September 14th, 2016

Here’s my MIDI software synth Bawami doing its best to keep responding while trying to play TheSuperMarioBros2’s black MIDI “Arecibo”. The left view shows how it’s processing every MIDI message. Not shown: about 5 minutes of Bawami loading the 12 MB MIDI file hideously inefficiently (tempo changes make it even worse).
This problem of my player stopping responding when maxed out is something I need to (re-)fix. I fixed it a long time ago (probably before releasing Bawami), but somehow broke it again afterwards, also a long time ago now… As always, the most recent version of Bawami can be downloaded here (also check the most recently tagged posts to see recent changes).
TheSuperMarioBros2 have made a lot of great black MIDIs that are often fun to stress-test MIDI players with. You can see lots playing at their channel (they also provide download links for the MIDI files). However, Bawami’s loading of MIDIs is inefficient, so I’d recommend not trying to torture it with black MIDIs too much. I also suggest unticking “Loop” so that, if it stops responding during playback, it’ll eventually start responding again at the end.
Behind the goggles
Wednesday, September 14th, 2016

This explains a lot (maybe).