The Linux aplay command of ALSA is pretty cool, as it can be used as a tool to play any kind of raw data as sound. The data can be piped into the standard input of aplay, or a file can be passed as a positional argument. Any kind of data can be used as sample data, but to really start using aplay one way or another, it would be best to find some ways to generate sample data.
Any kind of data can be piped into the standard input of aplay, including random data generated with cat and /dev/urandom. When doing something like this I can use the -d option of aplay to limit playback to, say, 3 seconds.
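For example, assuming a working ALSA setup with a default playback device:

```shell
# play 3 seconds of random bytes as sound, then stop (-d 3)
cat /dev/urandom | aplay -d 3

# same idea, but capped by byte count instead of time:
# 24000 bytes of U8 mono at the default 8000 Hz is also 3 seconds
head -c 24000 /dev/urandom | aplay
```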
I can set the -d option to 0, or just not give a -d option at all since that is the default, which will result in aplay continuing to play random data as sound until I press Ctrl+C or end the process by whatever other means.
By default the format that is used is U8, which is unsigned 8-bit audio. There are a number of other formats that can often be used; however, the full list of formats will change from one sound card to another. For the most part, thus far I have just been sticking to unsigned 8-bit mono audio.
The sample rate can also be adjusted by way of the -r option. The man page says that valid values are 2000 through 192000 Hertz, and the default, as we can see, is 8000 Hertz. The man page also says that if I give a value less than 300 it will be treated as kilohertz, which means I should be able to give a value like 41 for 41000 Hertz.
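To make the defaults explicit, and then bump the rate, something like this should work (still with random data; aplay --help prints the list of sample formats a given build recognizes):

```shell
# make the defaults explicit: unsigned 8-bit (-f U8), mono (-c 1), 8000 Hz (-r 8000)
cat /dev/urandom | aplay -f U8 -c 1 -r 8000 -d 2

# values under 300 are read as kilohertz, so -r 41 means 41000 Hz
cat /dev/urandom | aplay -f U8 -c 1 -r 41 -d 2
```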
So far I have been piping data into aplay by using the Linux cat command with /dev/urandom. However, another command I could pipe the random data into would be dd, which has a bs operand that can be used to set, say, 8000-byte blocks, of which I can take 60 with the count operand. This would then result in 60 seconds' worth of audio data if it is still an 8-bit sample size at 8000 Hertz. However, when doing so I will want to make sure that I set the iflag operand to fullblock, else the resulting file may end up a bit light because of partial reads.
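A sketch of that, writing the random data into a file first (the noise.raw name is just an example):

```shell
# 60 blocks of 8000 bytes = 480000 bytes = 60 seconds of U8 mono at 8000 Hz;
# iflag=fullblock makes dd retry partial reads so the file is not short
dd if=/dev/urandom of=noise.raw bs=8000 count=60 iflag=fullblock
aplay noise.raw
```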
If I am to use Node.js to create some sample data, I will want to use process.stdout.write over console.log so that I have control over whether there is an end-of-line in the output. I then also make use of a Buffer as a way to output binary data, rather than text data, that I will want to pipe or redirect into aplay. Thus far I have found that I mostly want to redirect into a file and then use aplay on that, at least until I figure out how to address an issue with data being lost when piping. More on that when we get into the actual usage at the bash prompt.
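A minimal sketch of such a script might look like this; the beep.js file name, the 440 Hz tone, and the one-second length are all just assumptions for the sake of illustration:

```javascript
// beep.js (hypothetical name): write one second of an unsigned 8-bit
// mono sine tone to standard output as raw bytes
const sampleRate = 8000; // matches aplay's default rate
const freq = 440;        // tone frequency in Hertz
const seconds = 1;

const n = sampleRate * seconds;
const buf = Buffer.alloc(n);
for (let i = 0; i < n; i++) {
    // sine wave scaled into the unsigned 8-bit range 0..255, centered on 128
    const a = Math.sin(2 * Math.PI * freq * (i / sampleRate));
    buf[i] = Math.round(128 + a * 127);
}
// process.stdout.write rather than console.log: no trailing newline,
// and it accepts a Buffer of raw bytes rather than text
process.stdout.write(buf);
```

Piping is then just `node beep.js | aplay`.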
One way to get started with this is to just pipe the data that this script generates with default settings into aplay with default settings.
This kind of thing will work okay for small examples that are about a second or so long, but I lose data if I make the tone too long. That is not a big deal with this script, but if I really get into this it will become a problem. I am sure that there is a way to address it; it must have something to do with the chunk size in the Node.js script, maybe. In any case, regardless of whether I address this or not, there are other ways of doing it.
Thus far I have found that if I want to make a long noise I will want to create a large file, and then play that file with aplay. One way to do this would be to make use of redirection.
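That could look something like this, with the generation squeezed into an inline node -e call so the whole thing fits on the command line (the beep.raw name and the 440 Hz tone are made up):

```shell
# write one second of U8 tone samples with an inline node script,
# then hand the finished file to aplay instead of piping live
node -e '
const n = 8000, b = Buffer.alloc(n);
for (let i = 0; i < n; i++) b[i] = 128 + Math.round(127 * Math.sin(2 * Math.PI * 440 * i / 8000));
process.stdout.write(b);
' > beep.raw
aplay beep.raw
```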
The script that I started out with above will work just fine when it comes to using redirection rather than piping. However, it would be nice to work out a script that generates data in real time that will in turn be consumed by aplay. The problem is that doing so is a little tricky. I managed to work out something that seems to hold up okay, but it is very much a placeholder script that I have in place for now, until I learn more about how to work with streams.
The tricky part is making sure that the rate at which the script generates sample data stays on target with the rate at which aplay consumes it. Doing so in a way that will not result in underruns or memory leakage is indeed the tricky part. If the script does not generate data fast enough, that results in an underrun condition in which aplay runs out of data to play, causing an interruption; if the rate is too high, that results in problems with eating up memory and drain-event-related issues.
A script that I have worked out, and that seems to work okay so far, addresses this problem by setting a fast or slow delay that is used when calling the setTimeout method. Yes, this is indeed crude, and over the long run I still run into the occasional problem where the stream needs to drain and I lose audio for a bit. However, if I set the fast rate high enough, it takes a long time for this to happen while still avoiding the underrun problem, at least for the most part.
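A stripped-down sketch of that idea might look like this. One way to pick between the two delays is the return value of process.stdout.write, which is false once the stream's internal buffer is past its high water mark; the chunk size, delay values, and chunk limit here are all just assumptions:

```javascript
// crude real-time tone generator meant to be piped into aplay
const sampleRate = 8000;
const chunkSize = 4000;  // half a second of U8 mono audio per chunk
const fastDelay = 400;   // ms: slightly faster than real time, guards against underrun
const slowDelay = 600;   // ms: slower than real time, lets the buffer drain
let t = 0;               // running sample index across chunks

function makeChunk() {
    const buf = Buffer.alloc(chunkSize);
    for (let i = 0; i < chunkSize; i++) {
        buf[i] = 128 + Math.round(127 * Math.sin(2 * Math.PI * 440 * (t++ / sampleRate)));
    }
    return buf;
}

let chunksLeft = 6;      // ~3 seconds; remove this guard to run until Ctrl+C
function loop() {
    if (chunksLeft-- <= 0) return;
    // write() returns false when the buffer is past its high water mark,
    // so back off to the slow delay and give aplay time to catch up
    const keepingUp = process.stdout.write(makeChunk());
    setTimeout(loop, keepingUp ? fastDelay : slowDelay);
}
loop();
```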
For the most part it would seem that this works okay thus far. When the script first starts I do get an underrun from aplay, and I am also sure that if I let this run long enough it will still need to drain. However, until I get a better idea of how to manage this, I will have to go with some code like this.
At least I understand what the problem is: it is just the nature of streams, which is what I am dealing with. What would be nice is a way to monitor the current state of the standard output stream, and throttle the rate at which I am generating sample data up and down as needed, long before the high water mark is reached.
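Node does expose some of that state: a Writable stream has writableLength and writableHighWaterMark properties, so a future version of the script might poll something like this before each write (the 0.5 threshold and the delay values are made-up numbers):

```javascript
// how full is stdout's internal buffer, relative to its high water mark?
const fill = process.stdout.writableLength / process.stdout.writableHighWaterMark;
// pick a pacing delay from the fill ratio instead of waiting for write()
// to report that the high water mark has already been passed
const delay = fill > 0.5 ? 600 : 400;
console.error(`buffer ${Math.round(fill * 100)}% full, next write in ${delay}ms`);
```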
I am thinking that one way I might want to get into creating sample data for aplay would involve writing scripts that generate JSON data in the form of arguments that will be used to call a function on a frame-by-frame basis. So I would have code that generates this JSON data, and then I would also have code that converts the JSON data to binary data, which would then be used to create files that can in turn be played with the aplay command.
An example of some JSON data that will create one second of sound would then look like this:
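The exact shape of the format is still up in the air; one guess, where each object holds the arguments for one run of frames (the property names are just assumptions), might be:

```json
[
    { "frames": 4000, "freq": 440, "amp": 1.0 },
    { "frames": 4000, "freq": 220, "amp": 0.5 }
]
```

Two entries of 4000 frames each add up to 8000 frames, which is one second at the default 8000 Hz rate.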
The script that I then use to convert this JSON data to binary data might then look something like this:
I would then call the script and redirect the output into a file. I can then play the file with aplay.
The aplay command is pretty cool, then, as it allows me to make noise with any kind of binary data, and there might be forms of data that will result in interesting sounds when used that way. However, if you know a thing or two about a programming language or two, there is no end to what can be done with everything there is to work with in that programming environment to generate binary data to then pipe into aplay.