Sound engineering as we know it today has a long history, dating back to the early 1960s when mixing consoles started to settle into the shape we recognize today.
Computer audio took a while to catch up though. At first, computer-type people thought they could simply walk all over established sound engineering traditions and principles. They got it all wrong, of course, and some pretty awful sound was the result (as was pretty awful video, pretty awful print publishing, and anything else that computer people thought they could easily understand, but couldn't).
But things moved on, and software developers realized that experts outside of computing were just that: experts who already knew what they were doing. So the developers started to take the extra time and trouble to analyze those experts' working practices and adapt their computer systems to what people really needed and wanted. Apple's Final Cut Pro is an excellent example of that outside of audio.
But now there is a new wave forming. This is where computer audio can go beyond the confines of traditional audio. Computers, particularly networked computers, can do so much more than conventional sound systems. This calls for new techniques, and at last it demands that sound people adapt to the changes that computers bring. For genuine reasons, and not just learning the latest interface or operating system.
An example of this is Macromedia's Flash software. Flash content, as you know, plays in a browser plug-in that can handle text, video, audio and animation. Currently its principal application is animation, but it is capable of so much more.
In Flash there is a timeline, just as in audio editing software. But the timeline doesn't necessarily flow in one direction only. A Flash movie can skip back and forth along the timeline according to interactions with the user.
So clicking a button at some point could cause the Flash player to skip to a point in the timeline that is later, earlier, or perhaps even inaccessible other than by clicking that button. And the audio has to work with all of this.
There are three basic types of sound in Flash. The most basic is perhaps the 'stream' sound. This is a sound that will start at a certain point on the timeline and carry on playing until a new 'keyframe' is reached, or an instruction is given to stop playback.
Flash divides the sound file into 'subclips' and embeds each one into an individual frame of the movie. This allows the sound to synchronize to the action or animation. As its name suggests, a stream sound doesn't have to load completely before playback begins. It can start playing after only a few frames have been downloaded.
Another type of sound is the 'event' sound. Event sounds are in a sense independent of the timeline. They play when initiated by a keyframe, but they will continue to play regardless of what else happens, even the end of the timeline being reached. Event sounds can also be initiated by the user clicking a button, which is great for feedback and interactivity. If a movie loops over the keyframe that triggers an event sound, then the sound will be triggered repeatedly.
'Start' sounds are like event sounds, except that if a start sound is already playing, it can't be triggered again. The difference is subtle, but in the natural course of creating a Flash movie you will find situations that call for each behavior.
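The event/start distinction is easy to see in a toy simulation. The sketch below is plain Python, not Flash or ActionScript; the class and method names are invented for illustration, and it only models which triggers take effect, not actual audio playback.

```python
# Toy simulation of Flash's event and start sound behaviours.
# 'Player' and its methods are illustrative names, not any Flash API.

class Player:
    def __init__(self):
        self.instances = []  # names of sound instances currently playing

    def trigger_event(self, name):
        # Event sound: every trigger layers a new instance,
        # even if the same sound is already playing.
        self.instances.append(name)

    def trigger_start(self, name):
        # Start sound: ignored if that sound is already playing.
        if name not in self.instances:
            self.instances.append(name)

p = Player()
p.trigger_event("click")
p.trigger_event("click")   # layers a second copy of "click"
p.trigger_start("music")
p.trigger_start("music")   # ignored: "music" is still playing
print(p.instances)         # ['click', 'click', 'music']
```

This is also why a movie that loops over an event-sound keyframe piles up overlapping copies, whereas the same loop over a start sound plays it only once.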
There is an interesting point here. In professional video it is common for sound to be handled separately from the pictures, by specialists. But the creators of Flash movies are often expected to be able to handle everything themselves. Inevitably this leads to a degradation in the quality of the sound as the movie creator's attention is mostly on the animation. But that could change - it requires that sound specialists immerse themselves in Flash to a point where they can add significant value to the audio, and then promote their skills.
This is a genuinely new opportunity. If you want to prepare for the future, get working with Flash right now.