Often simply called “sound,” audio refers to sound waves and the electrical signals that represent them. These waves travel through media such as air and water. When sound is recorded digitally, the wave is captured at a fixed sample rate. This sample rate is measured in hertz (samples per second); voltage units such as millivolts or volts RMS (root mean square) describe a signal's amplitude, not how often it is sampled.
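To make the idea of a sample rate concrete, here is a minimal sketch that samples a sine wave at a chosen rate; the function name `sample_sine` is a hypothetical helper, not part of any library:

```python
import math

def sample_sine(freq_hz, sample_rate_hz, duration_s):
    """Sample a sine wave at the given sample rate (hypothetical helper)."""
    n = int(sample_rate_hz * duration_s)
    return [math.sin(2 * math.pi * freq_hz * t / sample_rate_hz)
            for t in range(n)]

# A 440 Hz tone sampled at the CD rate (44,100 Hz) for 10 ms
# yields 441 samples.
samples = sample_sine(440, 44100, 0.01)
print(len(samples))  # 441
```

The sample rate alone determines how many values are captured per second, independent of the wave's amplitude.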
When a sound wave is recorded, an analog-to-digital converter (ADC) circuit turns it into digital form. Each sample represents the amplitude of the waveform at one moment in time. On an audio CD, for example, each stereo sample occupies four bytes: a 16-bit value for the left channel and another for the right.
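The arithmetic behind the CD figures above can be checked directly; this short calculation uses the standard CD parameters (44,100 Hz, two channels, 16 bits per value):

```python
SAMPLE_RATE = 44100   # samples per second, per channel
CHANNELS = 2          # stereo: left and right
BYTES_PER_VALUE = 2   # 16-bit samples

bytes_per_sample = CHANNELS * BYTES_PER_VALUE   # the four bytes per stereo sample
bytes_per_second = SAMPLE_RATE * bytes_per_sample

print(bytes_per_sample)   # 4
print(bytes_per_second)   # 176400
```

That works out to roughly 10 MB per minute of uncompressed stereo audio, which is why lossy codecs exist.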
The waveform of music is usually more complex than that of speech or other simple audio, since it contains many simultaneous frequencies. Music therefore demands more accurate reproduction, so that the result remains free of harsh noises and distortion.
Compression, more precisely dynamic range compression, is a process used to balance the dynamics of a song or dialogue. It can keep a quiet speaker from getting lost in the mix and prevent a loud one from overwhelming it, smoothing out problems such as whispering or shouting.
Compression is controlled by several parameters, chiefly threshold, ratio, attack, and release. Together they determine when the signal is compressed, by how much, and how quickly the compressor engages and lets go. Adjusting them balances the overall level without making the compression itself audible.
The attack time determines how quickly the compressor reduces gain once the signal crosses the threshold; the release time determines how quickly the gain returns to normal after the signal falls back below it. Short attack and release times make the compressor respond quickly but can sound abrupt or introduce audible pumping, while longer times act more gently but can reduce the overall perceived volume.
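The interaction of threshold, ratio, attack, and release can be sketched as a minimal feed-forward compressor. This is an illustrative implementation under simplifying assumptions (per-sample envelope smoothing in dB, no knee, no make-up gain); the function name `compress` and its defaults are hypothetical:

```python
import math

def compress(samples, threshold_db=-20.0, ratio=4.0,
             attack_s=0.01, release_s=0.1, sample_rate=44100):
    """Minimal dynamic range compressor sketch (hypothetical helper).

    Levels above threshold_db are reduced by the given ratio; the
    attack and release times smooth how fast gain reduction is
    applied and removed.
    """
    attack_coeff = math.exp(-1.0 / (attack_s * sample_rate))
    release_coeff = math.exp(-1.0 / (release_s * sample_rate))
    env_db = 0.0  # smoothed gain reduction, in dB
    out = []
    for x in samples:
        level_db = 20 * math.log10(max(abs(x), 1e-9))
        over_db = max(level_db - threshold_db, 0.0)
        target_db = over_db * (1.0 - 1.0 / ratio)  # desired gain reduction
        # Attack while reduction is increasing, release while decreasing.
        coeff = attack_coeff if target_db > env_db else release_coeff
        env_db = coeff * env_db + (1.0 - coeff) * target_db
        out.append(x * 10 ** (-env_db / 20))
    return out

# A steady full-scale signal (0 dB) is 20 dB over a -20 dB threshold;
# at a 4:1 ratio it settles to 15 dB of gain reduction (~0.178 linear).
loud = compress([1.0] * 44100)
quiet = compress([0.05] * 100)  # below threshold: passes through unchanged
```

A signal below the threshold is untouched, while a loud one is pulled down gradually at the attack rate rather than instantly, which is exactly the behavior the attack parameter controls.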
In addition to the parameters above, some codecs also support channels with deliberately limited frequency bandwidth. These can be configured by setting upper and lower frequency limits for the channel.
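Band-limiting a channel between a lower and an upper frequency can be sketched with a pair of simple first-order filters; this is a rough illustration (real codecs use much sharper filters), and `band_limit` is a hypothetical helper:

```python
import math

def band_limit(samples, low_hz, high_hz, sample_rate=44100):
    """Restrict a signal to roughly [low_hz, high_hz] (first-order sketch)."""
    # One-pole low-pass at the upper frequency limit.
    a_lp = math.exp(-2 * math.pi * high_hz / sample_rate)
    lp_state, lowpassed = 0.0, []
    for x in samples:
        lp_state = a_lp * lp_state + (1 - a_lp) * x
        lowpassed.append(lp_state)
    # One-pole high-pass at the lower limit: subtract a low-passed copy.
    a_hp = math.exp(-2 * math.pi * low_hz / sample_rate)
    hp_state, out = 0.0, []
    for x in lowpassed:
        hp_state = a_hp * hp_state + (1 - a_hp) * x
        out.append(x - hp_state)
    return out

# A constant (0 Hz) input lies below the 100 Hz lower limit,
# so the band-limited output decays toward zero.
dc = band_limit([1.0] * 10000, low_hz=100, high_hz=8000)
```

Content outside the configured band is attenuated, which is the practical effect of setting upper and lower frequency limits on a channel.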