Sound engineering as we know it today has a long history, dating back to the early 1960s when mixing consoles began to take the shape we recognize today.
Computer audio took a while to catch up though. At first, computer-type people thought they could simply walk all over established sound engineering traditions and principles. They got it all wrong, of course, and some pretty awful sound was the result (as was pretty awful video, pretty awful print publishing and anything else that computer people thought they could easily understand, but couldn't).
But things moved on, and software developers realized that experts outside of computing were exactly that: people who already knew what they were doing. So the developers started to take the extra time and trouble to analyze their working practices, and adapt their computer systems to what people really needed and wanted. Apple's Final Cut Pro is an excellent example of that outside of audio.
But now there is a new wave forming. This is where computer audio can go beyond the confines of traditional audio. Computers, particularly networked computers, can do so much more than conventional sound systems. This calls for new techniques, and at last it demands that sound people adapt to the changes that computers bring, for genuine reasons and not just to learn the latest interface or operating system.
An example of this is Macromedia's Flash software. Flash content, as you know, plays through a browser plug-in that can handle text, video, audio and animation. Currently its principal application is animation, but it is capable of so much more.
In Flash there is a timeline, just as in audio editing software. But the timeline doesn't necessarily flow in one direction only. A Flash movie can skip back and forth along the timeline according to interactions with the user.
So clicking a button at some point could cause the Flash player to skip to a point in the timeline that is later, earlier, or perhaps even inaccessible other than by clicking that button. And the audio has to work with all of this.
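This non-linear playback can be sketched in a few lines of plain JavaScript (not ActionScript, and not the Flash API): a minimal timeline object that normally advances one frame at a time, but can also jump anywhere on a button click, much as ActionScript's gotoAndPlay does. The class and method names here are hypothetical illustrations.

```javascript
// A minimal sketch of a Flash-style timeline that can jump non-linearly.
// Names are illustrative, not Flash API calls.
class Timeline {
  constructor(totalFrames) {
    this.totalFrames = totalFrames;
    this.currentFrame = 1; // Flash timelines are 1-indexed
  }
  // Advance one frame, looping back to frame 1 at the end
  advance() {
    this.currentFrame = this.currentFrame % this.totalFrames + 1;
  }
  // Jump anywhere on the timeline: later, earlier, or to a frame
  // only reachable through this call (as with a button handler)
  gotoFrame(frame) {
    this.currentFrame = frame;
  }
}

const movie = new Timeline(100);
movie.advance();     // normal playback: now on frame 2
movie.gotoFrame(75); // a button click skips ahead
movie.gotoFrame(10); // another click skips back
```

Any audio design for Flash has to cope with jumps like these, since the sound a user hears depends on the path they take through the timeline, not just on elapsed time.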
There are three basic types of sound in Flash. The most basic is perhaps the 'stream' sound. This is a sound that will start at a certain point on the timeline and carry on playing until a new 'keyframe' is reached, or an instruction is given to stop playback.
Flash divides the sound file into 'subclips' and embeds each one into an individual frame of the movie. This allows the sound to synchronize to the action or animation. As its name suggests, a stream sound doesn't have to load completely to start playback. It can start playing after a few frames have been downloaded.
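The arithmetic behind those per-frame subclips is simple: at a given audio sample rate and movie frame rate, each frame carries a fixed slice of samples. Here is a small sketch, with illustrative numbers (the function name is hypothetical, not part of Flash):

```javascript
// Each movie frame carries sampleRate / frameRate audio samples.
function samplesPerFrame(sampleRate, frameRate) {
  return sampleRate / frameRate;
}

// A 44.1 kHz stream in a 12 fps movie: 3675 samples per frame
const slice = samplesPerFrame(44100, 12); // 3675
```

This is also why stream sounds stay locked to the animation: if playback skips frames, the corresponding audio slices are skipped too.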
Another type of sound is the 'event' sound. Event sounds are in a sense independent of the timeline. They play when initiated by a keyframe, but they will continue to play regardless of what else happens, even the end of the timeline being reached. Event sounds can also be initiated by the user clicking a button, which is great for feedback and interactivity. If a movie loops over the keyframe that triggers an event sound, then the sound will be triggered repeatedly.
'Start' sounds are like event sounds, but if a start sound is already playing, it can't be triggered again. The difference is subtle, but in the natural course of creating a Flash movie, situations calling for one or the other arise quite naturally.
There is an interesting point here. In professional video it is common for sound to be handled separately from the pictures, by specialists. But the creators of Flash movies are often expected to be able to handle everything themselves. Inevitably this leads to a degradation in the quality of the sound as the movie creator's attention is mostly on the animation. But that could change - it requires that sound specialists immerse themselves in Flash to a point where they can add significant value to the audio, and then promote their skills.
This is a genuinely new opportunity. If you want to prepare for the future, get working with Flash right now.