- Audacity is a high-quality, free sound-editing program that has all the features you are likely to need. http://audacity.sourceforge.net/
Sound editing will be covered in a later lesson.
There are also other low-cost programs available for sound editing that will do a fine job for most of your needs.
Tuesday, September 28, 2010
Audacity
Cool-Edit 2000
- A much greater degree of control and more extensive features for manipulating sound files are available in sophisticated sound applications such as Cool Edit 2000 (no longer in production but currently in our lab), Adobe Audition, and Sound Forge. The main features needed are selecting and cropping a sound as you might a photo, normalizing a sound as you might adjust the brightness of a photo, and saving the sound in a variety of formats commonly supported by multimedia applications and WWW browsers.
The Edit / Trim menu will crop (or trim) a selected portion of the sound. File / Save As... offers a wide variety of options, including .AU, .WAV, .RA (RealAudio), and .MP3. The MP3 format is widely accessible and uses an excellent compression CODEC (coder-decoder) to make the file size smaller while maintaining very good sound quality.
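If you ever need to make the same kind of trim outside a sound editor, the idea can be scripted. Below is a minimal sketch in Python using only the standard wave module; it assumes an uncompressed PCM .WAV file, and the file names and times are made up for illustration.

    import wave

    def trim_wav(src_path, dst_path, start_sec, end_sec):
        # Crop a selection out of a PCM .WAV file, much like Edit / Trim in an editor.
        with wave.open(src_path, "rb") as src:
            params = src.getparams()                    # channels, sample width, rate, ...
            rate = src.getframerate()
            src.setpos(int(start_sec * rate))           # jump to the start of the selection
            frames = src.readframes(int((end_sec - start_sec) * rate))
        with wave.open(dst_path, "wb") as dst:
            dst.setparams(params)                       # keep the original format
            dst.writeframes(frames)                     # write only the selected portion

    # Example (hypothetical file): keep the audio between 1.5 s and 4.0 s.
    trim_wav("voice.wav", "voice_trimmed.wav", start_sec=1.5, end_sec=4.0)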
Sound Recorder
- One of the oldest ways to make a recording using Windows' built-in tools is to open the entertainment accessory (start / all programs / accessories / entertainment) Sound Recorder. Click on the red recording dot to begin. Save the file in .WAV format using File / Save, and be sure to select File / New to begin another recording from scratch. Under the menu (Edit / Audio Properties / Customize) it is possible to select a frequency and resolution other than the old default of 22,050 Hz, 8-bit mono. The best frequency and resolution are always a tradeoff between quality and size, and music is more demanding than voice. We recommend using at least 22,000 Hz, 16-bit for voice recording for projects in this class. However, we absolutely do NOT recommend using Sound Recorder at all! You can download free software that is infinitely better.
AU samples
44,000 Hz, 16-bit: 110 KB | 44,000 Hz, 8-bit: 90 KB
22,000 Hz, 16-bit:  61 KB | 22,000 Hz, 8-bit: 45 KB
 8,000 Hz, 16-bit:  16 KB |  8,000 Hz, 8-bit: 13 KB
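The sizes in the table follow from simple arithmetic: bytes = sampling rate x (bits per sample / 8) x channels x seconds. A quick sketch of that calculation is below; the 1.3-second clip length is only a guess, and real .AU/.WAV files add headers (and .AU often uses 8-bit mu-law), so the figures above will not match exactly.

    def recording_size_kb(rate_hz, bits, seconds, channels=1):
        # Uncompressed size: samples per second * bytes per sample * channels * duration.
        return rate_hz * (bits // 8) * channels * seconds / 1024

    for rate, bits in [(44000, 16), (44000, 8), (22000, 16), (22000, 8), (8000, 16), (8000, 8)]:
        print(f"{rate:>6} Hz, {bits:>2}-bit: {recording_size_kb(rate, bits, 1.3):.0f} KB")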
Windows built-in multimedia accessories
The sound card in a computer can create sound, process sound from input sources, and send sound to output destinations. A sound recorder may capture sound from a microphone (mic in) or from an audio device (line in). It is possible to control which source will be used by opening the entertainment accessory called volume control (start / all programs / accessories / entertainment). This utility controls the output or playback volume of a number of devices (microphone, line in, CD audio, midi, wave, etc.).
By resetting the options/properties from playback to recording, the input source (mic, line in, CD, midi) may be selected. If you are recording from a microphone, you may need to check that the mic is selected here, that the mic volume is adequate but not too loud, and, perhaps if needed, that the microphone is boosted (under Advanced properties). You have to play with it to see what works for your current recording conditions.
Digital vs. analog
Sound is made when objects vibrate, producing pressure waves that can be picked up by our ears. These waves can be captured when they vibrate the membrane of a microphone and can be re-created by the amplified vibrations of the membrane in a speaker. If we graph the intensity of this wave or of the motion of the microphone membrane over time, we will get a smooth waveform curve in which the frequency of a sound is the number of peaks per second (Hertz = cycles per second). The distance between two peaks is the wavelength. As with most physical properties (e.g. temperature, pressure, velocity, the shape of the groove of an LP), this is an analog signal. In other words, between any two moments in time we could make an infinite number of different measurements of intensity, and between any two points on the scale there can be an infinite number of different values. Although the graph shown below is very simple, waveforms (a graph of intensity) can be quite complex because many different frequencies are usually present simultaneously (a spectral graph will show the distribution of frequencies in a sound).
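To make frequency and wavelength concrete, here is a small sketch: a pure 440 Hz tone (concert A) traced as a sine curve, and its wavelength computed from the speed of sound, roughly 343 m/s in air at room temperature.

    import math

    SPEED_OF_SOUND = 343.0   # m/s in air at about 20 degrees C

    def wavelength_m(frequency_hz):
        # wavelength = speed of sound / frequency
        return SPEED_OF_SOUND / frequency_hz

    frequency = 440.0        # a pure tone: intensity over time is a smooth sine curve
    one_millisecond_of_samples = [math.sin(2 * math.pi * frequency * t / 44100) for t in range(44)]
    print(f"A 440 Hz tone has a wavelength of about {wavelength_m(frequency):.2f} m")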
the simple stuff
Sound fills the air all around us, and our ear is working all the time to filter out unimportant sound from our consciousness so we can pay attention to the important information. This means that we are largely unaware of how much sound surrounds us at all times. If you concentrate on what you hear, you will find that silence is very hard to find. A microphone does not have an intelligent filter, so it will pick up everything (depending on how sensitive it is). By carefully preparing your recording environment, you can eliminate a lot of the background noise that might otherwise spoil your soundtrack. Getting a clean recording at the outset will mean less work trying to fix problems later.
Your microphone should be suited to your recording equipment and to the sort of recording you are doing. Poor quality equipment or a noisy environment will introduce noise into your recording, interfering with the signal that you are trying to capture. Basically, signal is what you want, noise is the sound you don't want. An omni-directional mic picks up sound from all directions and is good for recording background noise, music from several directions, or several people talking. A uni-directional mic is pointed at the sound coming from a single source - one speaker, one instrument, etc. Clip-on mics can do a great job recording individuals in conversations because they are placed so close to the person's mouth, but if you have more than one or two participants, you may need to think about mixing sounds from multiple mics to control different sound levels. Table mics are nice for several people sitting around a table but will pick up the sound of someone touching or bumping into the table, shuffling papers, etc., so subjects need to take care if you use these. Microphones that are built into a tape recorder or camcorder tend to pick up machine vibrations from motors (more noise) and are difficult to place close enough to a subject. A long microphone cable may act as an antenna that could pick up static from fluorescent lighting or even a local radio station.
If a microphone's specifications include "signal to noise ratio", a high signal to noise ratio will sound best. Another way to improve your microphone's sound is to use a pre-amp (see below).
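Signal-to-noise ratio is normally quoted in decibels, computed from the relative levels of the wanted signal and the background noise; the higher the number, the cleaner the recording. A minimal sketch of that calculation follows (the sample values are invented purely for illustration).

    import math

    def snr_db(signal_samples, noise_samples):
        # Signal-to-noise ratio in decibels: 20 * log10(RMS(signal) / RMS(noise)).
        rms = lambda xs: math.sqrt(sum(x * x for x in xs) / len(xs))
        return 20 * math.log10(rms(signal_samples) / rms(noise_samples))

    # A strong signal over quiet background noise gives a large (good) SNR: about 40 dB here.
    print(snr_db([0.5, -0.5, 0.5, -0.5], [0.005, -0.005, 0.005, -0.005]))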
Digital Sound Recording
Incorporating sound in a presentation or WWW page involves preparing the original sound and digitizing it in an appropriate format. Because we tend to be visually oriented, sound production can be harder to do well than visual production, so you'll probably need to spend more time on it than you might think.
Digital signals
A digital representation expresses the pressure waveform as a sequence of symbols, usually binary numbers. This permits signal processing using digital circuits such as microprocessors and computers. Although such a conversion can be prone to loss, most modern audio systems use this approach as the techniques of digital signal processing are much more powerful and efficient than analog domain signal processing.
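As a rough sketch of what that conversion produces, here is a single analog level turned into a 16-bit binary code, the sample format used on audio CDs. The 0.25 input value is arbitrary.

    def quantize_16bit(x):
        # Map an analog level in the range -1.0..1.0 to a signed 16-bit integer,
        # the usual sample format for CD-quality PCM audio.
        x = max(-1.0, min(1.0, x))             # clip out-of-range input
        return int(round(x * 32767))

    sample = 0.25                               # one instantaneous analog level
    code = quantize_16bit(sample)
    print(code, format(code & 0xFFFF, "016b"))  # 8192 and its 16-bit binary form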
Audio signal processing
Audio signal processing, sometimes referred to as audio processing, is the intentional alteration of auditory signals, or sound. As audio signals may be electronically represented in either digital or analog format, signal processing may occur in either domain. Analog processors operate directly on the electrical signal, while digital processors operate mathematically on the binary representation of that signal.
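A tiny example of the digital side: a gain change is nothing more than multiplying every sample value by a constant, where an analog processor would use an amplifier circuit instead. The sketch below uses invented sample values.

    def apply_gain(samples, gain_db):
        # A digital processor works arithmetically on the numbers themselves:
        # a gain change is just multiplication of every sample by a constant factor.
        factor = 10 ** (gain_db / 20)
        return [s * factor for s in samples]

    quiet = [0.1, -0.2, 0.15, -0.05]
    louder = apply_gain(quiet, gain_db=6)       # roughly doubles the amplitude
    print(louder)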
References
1. Fine, Thomas (2008). "The Dawn of Commercial Digital Recording". Barry R. Ashpole (ed.). ARSC Journal. http://www.aes.org/aeshc/pdf/fine_dawn-of-digital.pdf. Retrieved 2010-05-02.
* Borwick, John, ed., 1994: Sound Recording Practice (Oxford: Oxford University Press)
* Ifeachor, Emmanuel C., and Jervis, Barrie W., 2002: Digital Signal Processing: A Practical Approach (Harlow, England: Pearson Education Limited)
* Rabiner, Lawrence R., and Gold, Bernard, 1975: Theory and Application of Digital Signal Processing (Englewood Cliffs, New Jersey: Prentice-Hall, Inc.)
* Watkinson, John, 1994: The Art of Digital Audio (Oxford: Focal Press)
* Bosi, Marina, and Goldberg, Richard E., 2003: Introduction to Digital Audio Coding and Standards (Springer)
Digital audio interfaces
Audio-specific interfaces include:
* AC'97 (Audio Codec 1997), an interface between integrated circuits on PC motherboards
* Intel High Definition Audio, a modern replacement for AC'97
* ADAT interface
* AES/EBU interface with XLR connectors
* AES47, professional AES3 digital audio over Asynchronous Transfer Mode networks
* I²S (Inter-IC Sound), an interface between integrated circuits in consumer electronics
* MADI (Multichannel Audio Digital Interface)
* MIDI, a low-bandwidth interconnect for carrying instrument data; it cannot carry sound but can carry digital sample data in non-realtime
* S/PDIF, either over coaxial cable or TOSLINK
* TDIF, a TASCAM proprietary format using a D-sub cable
* A2DP via Bluetooth
Naturally, any digital bus (e.g., USB, FireWire, and PCI) can carry digital audio. Also, several interfaces are engineered to carry digital video and audio together, including HDMI and DisplayPort.
Digital Audio Broadcasting
Digital Audio Broadcasting (DAB) is a digital radio technology for broadcasting radio stations, used in several countries, particularly in Europe. As of 2006, approximately 1,000 stations worldwide broadcast in the DAB format.[1]
The DAB standard was initiated as a European research project in the 1980s,[2] and the BBC launched the first DAB digital radio in 1995.[3] DAB receivers have been available in many countries since the end of the nineties. DAB may offer more radio programmes over a specific spectrum than analogue FM radio. DAB is more robust with regard to noise and multipath fading for mobile listening, since DAB reception quality first degrades rapidly when the signal strength falls below a critical threshold, whereas FM reception quality degrades slowly with the decreasing signal.
An "informal listening test" by Professor Sverre Holm has shown that for stationary listening the audio quality on DAB is lower than FM stereo, due to most stations using a bit rate of 128 kbit/s or less, with the MP2 audio codec, which requires 160 kbit/s to achieve perceived FM quality. 128 kbit/s gives better dynamic range or signal-to-noise ratio than FM radio, but a more smeared stereo image, and an upper cutoff frequency of 14 kHz, corresponding to 15 kHz of FM radio.[4] However, "CD sound quality" with MP2 is possible "with 256..192 kbps".[5]
An upgraded version of the system, called DAB+, was released in February 2007. DAB is not forward compatible with DAB+, which means that DAB-only receivers will not be able to receive DAB+ broadcasts.[6] DAB+ is approximately twice as efficient as DAB due to the adoption of the AAC+ audio codec, and DAB+ can provide high-quality audio at bit rates as low as 64 kbit/s.[7] Reception will also be more robust on DAB+ than on DAB due to the addition of Reed-Solomon error-correction coding.
More than 20 countries provide DAB transmissions, and several countries, such as Australia, Italy, Malta and Switzerland, have started transmitting DAB+ stations. See Countries using DAB/DMB. However, DAB radio has still not replaced the old FM system in popularity.
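For a sense of scale, the bit rates mentioned above translate directly into data per hour of audio. A quick sketch of that arithmetic follows; the one-hour duration is chosen only for illustration.

    def megabytes_per_hour(kbit_per_s):
        # kilobits per second -> megabytes per hour of audio
        return kbit_per_s * 1000 / 8 * 3600 / 1_000_000

    for rate in (64, 128, 160, 192, 256):       # kbit/s figures mentioned in the text
        print(f"{rate:>3} kbit/s ~ {megabytes_per_hour(rate):.0f} MB per hour")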
Conversion process
A digital audio system starts with an ADC that converts an analog signal to a digital signal.[note 1] The ADC runs at a sampling rate and converts at a known bit resolution. For example, CD audio has a sampling rate of 44.1 kHz (44,100 samples per second) and 16-bit resolution for each channel. For stereo there are two channels: 'left' and 'right'. If the analog signal is not already bandlimited then an anti-aliasing filter is necessary before conversion, to prevent aliasing in the digital signal. (Aliasing occurs when frequencies above the Nyquist frequency have not been band limited, and instead appear as audible artifacts in the lower frequencies).
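Those CD figures fix both the highest frequency the format can represent (the Nyquist frequency, half the sampling rate) and the raw data rate. A quick sketch of the arithmetic:

    sampling_rate = 44_100      # samples per second (CD audio)
    bit_depth = 16              # bits per sample
    channels = 2                # stereo: left and right

    nyquist_hz = sampling_rate / 2                          # highest representable frequency
    bits_per_second = sampling_rate * bit_depth * channels  # raw (uncompressed) data rate

    print(f"Nyquist frequency: {nyquist_hz:.0f} Hz")        # 22050 Hz
    print(f"Raw data rate: {bits_per_second} bit/s")        # 1,411,200 bit/s
    print(f"One minute of CD audio: {bits_per_second * 60 / 8 / 1_000_000:.1f} MB uncompressed")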
The digital audio signal may be stored or transmitted. Digital audio storage can be on a CD, a digital audio player, a hard drive, USB flash drive, CompactFlash, or any other digital data storage device. The digital signal may then be altered in a process which is called digital signal processing where it may be filtered or have effects applied. Audio data compression techniques — such as MP3, Advanced Audio Coding, Ogg Vorbis, or FLAC — are commonly employed to reduce the file size. Digital audio can be streamed to other devices.
The last step is for digital audio to be converted back to an analog signal with a DAC. Like ADCs, DACs run at a specific sampling rate and bit resolution but through the processes of oversampling, upsampling, and downsampling, this sampling rate may not be the same as the initial sampling rate.
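As a rough sketch of downsampling, the simplest possible version just keeps every Nth sample to lower the rate. Real resamplers low-pass filter first so frequencies above the new Nyquist limit do not alias; this sketch omits that step, and the sample values are invented.

    def downsample(samples, factor):
        # Keep every `factor`-th sample to lower the sampling rate.
        return samples[::factor]

    cd_rate, target_rate = 44_100, 22_050
    samples_44k = [0.0, 0.3, 0.5, 0.3, 0.0, -0.3, -0.5, -0.3]
    samples_22k = downsample(samples_44k, cd_rate // target_rate)   # factor of 2
    print(samples_22k)   # [0.0, 0.5, 0.0, -0.5]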
Overview of digital audio
Digital audio has emerged because of its usefulness in the recording, manipulation, mass-production, and distribution of sound. Modern distribution of music across the Internet via on-line stores depends on digital recording and digital compression algorithms. Distribution of audio as data files rather than as physical objects has significantly reduced the cost of distribution.
In an analog audio system, sounds begin as physical waveforms in the air, are transformed into an electrical representation of the waveform, via a transducer (for example, a microphone), and are stored or transmitted. To be re-created into sound, the process is reversed, through amplification and then conversion back into physical waveforms via a loudspeaker. Although its nature may change, analog audio's fundamental wave-like characteristics remain the same during its storage, transformation, duplication, and amplification.
Analog audio signals are susceptible to noise and distortion, unavoidable due to the innate characteristics of electronic circuits and associated devices. In the case of purely analog recording and reproduction, numerous opportunities for the introduction of noise and distortion exist throughout the entire process. When audio is digitized, distortion and noise are introduced only by the stages that precede conversion to digital format, and by the stages that follow conversion back to analog.
The digital audio chain begins when an analog audio signal is first sampled, and then (for pulse-code modulation, the usual form of digital audio) it is converted into binary signals—‘on/off’ pulses—which are stored as binary electronic, magnetic, or optical signals, rather than as continuous-time, continuous-level electronic or electromechanical signals. This signal may then be further encoded to allow correction of any errors that might occur in the storage or transmission of the signal; however, this encoding is for error correction and is not strictly part of the digital audio process. This "channel coding" is essential to the ability of a broadcast or recorded digital system to avoid loss of bit accuracy. The discrete time and level of the binary signal allow a decoder to recreate the analog signal upon replay. An example of a channel code is Eight-to-Fourteen Modulation (EFM), as used in the audio Compact Disc (CD).
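To make the "on/off pulses" idea concrete, here is a miniature PCM encoder: each sampled level becomes a fixed-width binary code. It uses 8-bit unsigned samples for brevity (CDs use 16-bit signed samples), and the input levels are invented.

    def pcm_encode_8bit(levels):
        # Pulse-code modulation in miniature: each sampled level (0.0..1.0)
        # becomes an 8-bit unsigned code, i.e. a fixed pattern of on/off pulses.
        return [format(int(round(x * 255)), "08b") for x in levels]

    print(pcm_encode_8bit([0.0, 0.5, 1.0]))   # ['00000000', '10000000', '11111111']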